Phoenix ingests any OpenTelemetry-compliant spans into the platform, but the UI is geared toward displaying spans whose attributes adhere to the "OpenInference" naming conventions.
There are numerous open community standards for where to put LLM information within OTel spans, but OpenInference predates most of them.
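For a rough sense of what that convention looks like, here's a sketch of OpenInference-style attributes on an LLM span, modeled as a plain dict. The key names follow the flat, dot-delimited style of the OpenInference semantic conventions, but treat the exact keys as an approximation rather than the authoritative spec:

```python
# A plain dict standing in for OTel span attributes, illustrating
# OpenInference-style naming (flat, dot-delimited keys).
span_attributes = {
    "openinference.span.kind": "LLM",  # e.g. LLM, CHAIN, RETRIEVER
    "llm.model_name": "gpt-4o-mini",
    "input.value": "What is OpenInference?",
    "output.value": "A set of naming conventions for LLM spans.",
    "llm.token_count.prompt": 12,
    "llm.token_count.completion": 9,
}

# A UI like Phoenix keys off names like these to render model,
# prompt, and completion fields instead of raw attribute dumps.
print(span_attributes["openinference.span.kind"])  # prints "LLM"
```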
The main friction was wrestling with the instrumentation, versus the out-of-the-box Langfuse Python decorator, which works pretty well for basic use cases.
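The appeal of the decorator approach is that a single line gets you a trace of a function call. Conceptually it works along these lines (a minimal stand-in sketch, not Langfuse's actual implementation; the real decorator reports to the Langfuse backend rather than a local list):

```python
import functools
import time

traces = []  # stand-in for a real tracing backend

def observe(fn):
    """Record each call's name, inputs, output, and latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        traces.append({
            "name": fn.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": result,
            "latency_s": time.perf_counter() - start,
        })
        return result
    return wrapper

@observe
def summarize(text: str) -> str:
    # placeholder for an actual LLM call
    return text[:10] + "..."

summarize("Phoenix ingests OpenTelemetry spans")
```

The trade-off the comment is pointing at: a decorator like this is effortless for basic use cases, but once you need fine-grained control over span attributes or nesting, you end up wrestling with the instrumentation layer directly.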
It’s been a while but I also recall that prompt management and other features in Phoenix weren’t really built out (probably not a goal for them, but I like having that functionality under the same umbrella).
Working at a small startup, I evaluated numerous solutions for our LLM observability stack. That was early this year (IIRC Langfuse was not open source then), and Phoenix was the only solution that worked out of the box and seemed to have the right "mindset", i.e. using OTel and integrating with Python and JS/LangChain. I wasted lots of time with others; some solutions did not even boot.
I suppose it depends on the way you approach your work. It's designed with an experimental mindset, which makes it very easy to keep stuff organized and separate, and to integrate with the rest of my experimental stack.
If you come from an ops background, other tools like SigNoz or Langfuse might feel more natural; I guess it's just a matter of perspective.