It's hallucination more in the sense that all LLM output is hallucination. CoT is not "what the LLM is thinking". I think of it as the model just creating more context/prompt for itself on the fly, so that when it comes up with a final response it has all that reasoning sitting in its context window.
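Mechanically, that view amounts to something like the sketch below. `generate` is a hypothetical stand-in for a single LLM sampling call, not any real library's API; the point is just that the "reasoning" is ordinary sampled text that gets fed back in as context before the final answer is sampled.

```python
def generate(prompt: str, max_tokens: int = 256) -> str:
    """Placeholder for one autoregressive LLM call (hypothetical).
    Swap in whatever model or API client you actually use."""
    raise NotImplementedError

def answer_with_cot(question: str) -> str:
    # Step 1: the "chain of thought" is just more sampled tokens,
    # elicited by a prompt like "Let's think step by step."
    cot_prompt = f"{question}\nLet's think step by step.\n"
    reasoning = generate(cot_prompt)

    # Step 2: the final answer is sampled with that reasoning already
    # in the context window, so it conditions the answer whether or not
    # it reflects anything like internal "thinking".
    final = generate(f"{cot_prompt}{reasoning}\nFinal answer:")
    return final
```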
We don't really know that. So far, CoT is only used to sell LLMs to the user (both figuratively, as a neat trick, and literally, as a way to increase the token count).