NN-based sentiment analysis is certainly a lot better than non-NN-based techniques.
Classification depends on the problem (and mostly the data size). Boosting is certainly competitive on tabular data and widely used everywhere I've worked.
No one talks about it (except on Kaggle) because it's pretty much at a local maximum. All the improvement comes from manual feature engineering.
But modern techniques using NNs on tabular data are competitive with boosting and do away with a lot of the feature engineering. That's a really interesting development.
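To make the "competitive out of the box" point concrete, here's a rough sketch of the kind of boosting baseline I mean, using scikit-learn's histogram gradient boosting on a synthetic stand-in for a tabular dataset (the data and hyperparameters are purely illustrative, not a benchmark):

```python
# Minimal sketch: a boosted-tree baseline on tabular data with almost no setup.
# Assumptions: scikit-learn is available; make_classification stands in for a real dataset.
from sklearn.datasets import make_classification
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a mid-sized tabular problem.
X, y = make_classification(n_samples=5000, n_features=30, n_informative=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Gradient-boosted trees: no feature scaling or manual interaction terms needed
# to get a competitive baseline.
clf = HistGradientBoostingClassifier(max_iter=300, learning_rate=0.1, random_state=0)
clf.fit(X_train, y_train)
print("boosting accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```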
> NN-based sentiment analysis is certainly a lot better than non-NN-based techniques.
I wouldn't say this. Sentiment analysis trained on the standard datasets is one place where performance is barely better than old-school linear classifiers. Those models remained brittle and easy to trick until recent, more flexible systems based on question answering, zero-shot entailment, or lots of instruction finetuning (improving in that order). I strongly advise against using something fine-tuned solely on sentiment datasets. It'd be a total waste.
> Sentiment analysis trained on the standard datasets is one place where performance is barely better than old-school linear classifiers
Well yeah. But why would you do that?
Do what everyone does: train on a large-scale language corpus (or use a pre-trained model), then fine-tune for sentiment analysis.
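Something like the standard transformers recipe is what I mean; here's a sketch (DistilBERT, the IMDB dataset, and the tiny subsets are just placeholder assumptions to keep it cheap, not recommendations):

```python
# Sketch: take a pre-trained LM and fine-tune it for binary sentiment classification.
# Assumptions: `transformers` and `datasets` installed; IMDB stands in for your labelled data.
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

dataset = load_dataset("imdb")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(output_dir="sentiment-ft",
                         num_train_epochs=1,
                         per_device_train_batch_size=16)

# Small subsets so the sketch runs quickly; use the full splits in practice.
trainer = Trainer(model=model, args=args,
                  train_dataset=tokenized["train"].shuffle(seed=0).select(range(2000)),
                  eval_dataset=tokenized["test"].select(range(2000)))
trainer.train()
print(trainer.evaluate())
```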
> I strongly advise against using something fine-tuned solely on sentiment datasets
Did you mean trained on sentiment datasets? I agree with that.
Otherwise, well, [1] is a decent overview of the field. I think Document Vectors using Cosine Similarity [2], at #17, is the highest-rated approach that isn't an NN trained on a large corpus and fine-tuned on a sentiment task. Even that uses document vectors that are trained on a large language corpus.
No, I meant fine-tuned. I also meant fine-tuned when I said trained. Experience applying fine-tuned sentiment classifiers to real-world data found the gain versus the cost of running them not to be worth it. They remain nearly as brittle as cheaper classifiers and have a habit of glomming on too hard to certain adjectives. They are also prone to overfitting to the fine-tuning data's domain. Transformers trained not specifically on sentiment but on general domains like question answering or entailment are just leagues better for sentiment tasks.
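For example, a general NLI model driven through the zero-shot pipeline, with no sentiment fine-tuning at all (the specific model and candidate labels here are just my assumptions for illustration):

```python
# Sketch: sentiment via zero-shot entailment, no sentiment-specific fine-tuning.
# Assumption: an NLI-trained model (facebook/bart-large-mnli) used through the
# transformers zero-shot-classification pipeline.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

texts = [
    "The battery life is great, but the screen cracked within a week.",
    "Honestly not terrible for the price.",
]
for text in texts:
    result = classifier(text, candidate_labels=["positive", "negative", "neutral"])
    # The pipeline returns labels sorted by score; take the top one.
    print(text, "->", result["labels"][0], round(result["scores"][0], 3))
```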
Sentiment analysis, classification.