I'm glad you have a grandma that plays games. My parents (younger than your grandma) think games are evil. I don't think anything will change their mind.
I feel like there's a weird kind of effect where this kind of spam has always existed, but it used to live somewhere in the long tail, on page 23 of the results. Now these things are all over the front page, and between this effect and the non-literal rewriting of our search intent, the information we actually need is lost in some unreachable part of the index.
What's interesting is that humans can still almost trivially distinguish these trash/spam sites from legitimate ones. This suggests it should still be relatively easy to build some kind of automation that punishes these sites and rewards good content.
I recall an adjacent problem in cyber defense -- trying to discern a squatted domain from a real one. Every sort of heuristic we could think of was pretty easily defeated. But one day, during a meeting, somebody observed that it was absolutely trivial to just go to the site in a browser and tell virtually instantaneously that it was a squatted domain instead of a real site. So we used an image classifier to classify screenshots of sites on raw domains as "squatted or not" and used that classification to flag domains in our raw traffic. It wasn't perfect, but it was easily the most powerful and ultimately straightforward way to attack the problem.
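For anyone curious what that looks like in practice, here's a toy sketch of the screenshot-and-classify idea (not our actual code): render the raw domain headlessly, screenshot it, and run the image through a binary classifier. I'm assuming Playwright for capture and a torchvision ResNet with placeholder weights ("squat_classifier.pt") for the model; the domain name is made up too.

```python
# Sketch: screenshot a raw domain and classify it as squatted vs. real.
# Assumes a binary classifier was fine-tuned elsewhere and saved to
# "squat_classifier.pt" (hypothetical filename/architecture).
import torch
from torchvision import transforms
from torchvision.models import resnet18
from PIL import Image
from playwright.sync_api import sync_playwright


def capture(domain: str, out_path: str) -> str:
    """Render the domain in a headless browser and save a screenshot."""
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()
        page.goto(f"http://{domain}", timeout=15000)
        page.screenshot(path=out_path)
        browser.close()
    return out_path


# Resize screenshots the same way the (hypothetical) classifier was trained.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])


def is_squatted(screenshot_path: str, model: torch.nn.Module) -> bool:
    """Return True if the screenshot looks like a parked/squatted page."""
    img = Image.open(screenshot_path).convert("RGB")
    batch = preprocess(img).unsqueeze(0)   # shape: (1, 3, 224, 224)
    with torch.no_grad():
        logits = model(batch)              # two classes: [real, squatted]
    return logits.argmax(dim=1).item() == 1


if __name__ == "__main__":
    model = resnet18(num_classes=2)        # placeholder architecture
    model.load_state_dict(torch.load("squat_classifier.pt"))  # hypothetical weights
    model.eval()
    shot = capture("example-suspicious-domain.com", "/tmp/shot.png")
    print("squatted" if is_squatted(shot, model) else "looks real")
```

The nice part is that all the adversarial cleverness in the DNS name itself stops mattering; you're judging the page the way a human would, by looking at it.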
- Maybe add a way for a user to adjust the layout of the search bars. For example, view them in a row or grid instead of a single column.
- Allow deleting a search bar from the list rather than only replacing it with another one. For example, I may want to see only GitHub and Google, but I can always add more from the 'more' dropdown menu. Basically, add an 'X' to the left of each search bar.