There are places where: a) weather predictions are unreliable, b) water is scarce. Just making the right decision about what hour to water yields a huge monthly saving of water.
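To make that concrete, here's a toy sketch of the decision; the forecast numbers and the scoring weights are made up for illustration, not a real controller:

```python
# Toy sketch: pick the watering hour with the lowest expected waste.
# Forecast tuples and scoring weights are invented for illustration.
hourly_forecast = [
    (5, 14, 0.05),   # (hour, temp_C, rain_probability)
    (12, 31, 0.00),
    (19, 22, 0.60),
]

def waste_score(temp_c: float, rain_prob: float) -> float:
    # Hot hours lose water to evaporation; watering right before
    # likely rain wastes the whole run. Lower score is better.
    return temp_c + 100 * rain_prob

best = min(hourly_forecast, key=lambda h: waste_score(h[1], h[2]))
print(f"Water at {best[0]}:00")  # -> Water at 5:00
```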
Need is a very strong word. We don't need a lot of what we have today.
But as a hobbyist I would rather program an LLM than learn a bunch of algorithms and sensor-reading code. It's also very similar to how I would think about the problem myself, which makes it easier to debug.
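Roughly what I mean, assuming the OpenAI Python client; the model name and prompt are placeholders, and you'd obviously want to validate the answer before actuating anything:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

readings = {"soil_moisture_pct": 18, "temp_c": 29, "forecast": "dry and windy"}

resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system",
         "content": "You decide whether to run a garden irrigation valve. "
                    "Reply WATER or SKIP, then give a one-line reason."},
        {"role": "user", "content": f"Current readings: {readings}"},
    ],
)
print(resp.choices[0].message.content)
# The one-line reason is the debugging aid: it mirrors how I'd reason myself.
```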
I think there are two schools of thought. One: the models will get so big that everyone everywhere will use them for everything, and the providers will make lots of money on API calls. The other: inference will get so computationally cheap that running models on the edge will cost nothing, so an LLM will be in everything. Then every computational device will have one, as long as you pay a license fee to the people who trained them.
In a greenhouse operation with high-value crops, automated control technologies have been around for decades, and AI is competing with today's sophisticated control technology designed, operated, and continually improved by agriculturists with detailed site-specific knowledge of water (quality, availability, etc.), cultivars, markets, disease pressures, and more. The marginal improvements AI can make, given poor data quality and availability, an existing finely tuned and functioning control system, and the vagaries of managing dynamic living systems, are… tiny.
The solution for water-constrained operations in the Americas is to move to a location with more water, not AI.
For field crops in the Americas, land and water are too cheap and crop prices too low to be worth optimizing with AI in the present era. The Americas (10% of world population) could meet 70% of world food demand with today's technologies if pressed… 40% without breaking a sweat. The Americas are blessed.
Talk to the Saudis, the Israelis, etc., but even there you will lose more production by interfering with the motivations, engagement levels, and cultures of working farmers than can be gained by any complex, opaque technological optimization scheme, AI or not. New cultivars, new chemicals, even new machinery: few problems (but see India for counterexamples). Changing millennia of farming practice with expensive, not-locally-maintainable, opaque technology: just no. That's a great truth learned over the last 70 years of development work.
As the other comment said, "have to" is a very strong word. But there are benefits to it: a) adaptability to local weather patterns, b) no WiFi access on large properties.
I see. I guess it all boils down to how low power you can make this.
Keep in mind that there are other wireless communication systems, long-range and low-power (LoRa, for example), that are specifically designed to handle this scenario.
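The pattern those radios rely on is transmit-a-few-bytes-then-sleep. A sketch of that duty cycle, with a hypothetical driver class standing in for a real transceiver library:

```python
import random
import time

class Radio:
    """Hypothetical stand-in for a LoRa-class transceiver driver."""
    def send(self, payload: bytes) -> None:
        print(f"tx {payload.hex()}")  # a few bytes can carry for kilometers
    def sleep(self) -> None:
        print("radio off")            # standby current in the uA range

def read_soil_moisture() -> int:
    return random.randint(0, 1023)   # placeholder ADC reading

radio = Radio()
for _ in range(3):                   # a real node loops forever
    radio.send(read_soil_moisture().to_bytes(2, "big"))
    radio.sleep()
    time.sleep(1)                    # real nodes sleep minutes between wakes
```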
If we're paying for reasoning tokens, we should have access to them, no? It seems reasonable enough to allow access, and then we could perhaps use our own streaming summarization models instead of relying on the very generic-sounding ones they're pushing.
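Something like this is what I have in mind. It assumes the OpenAI streaming API shape and pretends the raw reasoning text arrives as ordinary stream deltas (it currently doesn't), with a trivial stand-in for "our own" summarizer:

```python
from openai import OpenAI

client = OpenAI()

def my_summarizer(chunks: list[str]) -> str:
    """Stand-in for your own streaming summarization model."""
    text = "".join(chunks)
    return text[:120] + ("..." if len(text) > 120 else "")

stream = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; a reasoning model is the real target
    messages=[{"role": "user", "content": "Why is the sky blue?"}],
    stream=True,
)

chunks: list[str] = []
for event in stream:
    delta = event.choices[0].delta.content or ""
    chunks.append(delta)
    if len(chunks) % 50 == 0:  # summarize every ~50 chunks as they stream in
        print("summary so far:", my_summarizer(chunks))
```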
> There's ample evidence that thinking is often disassociated from the output.
What kind of work do you use LLMs for? For the semi-technical “find flaws in my argument” thing, I find it generally better at not making common or expected fallacies or assumptions.
I can see how LLMs could help raise the standard in that field, for example by surveying related research. Also, maybe in the not-too-distant future, by reproducing (some of) the results.
Writing consists of iterated re-writing (to me, anyway), i.e. finding better and better ways to express content 1. correctly, 2. clearly, and 3. space-economically.
By writing it down yourself, you understand what claims each piece of related work has made (and can realistically make, as there are sometimes inflationary lists of claims in papers), and this helps you formulate your own claim as it relates to them (new task, novel method for a known task, like an older method but works better, nearly as good as a past method but runs faster, etc.).
If you outsource it to a machine, you no longer see it through, and the result will be poor, unless you are a very bad writer.
I can, however, see a role for LLMs in an electronic "learn how to write better" tutoring system.
Pretty much, yes. Critical analysis is a necessary skill that needs practice. It's also necessary to be aware of the intricacies of work in one's own, narrowly defined topic area in order to communicate clearly how one's own methods are similar to or different from others' methods.
If I ask for a task and the output is not the one expected, I ask for the motivation that led to the bad decisions. Then ChatGPT proceeds to retry the task, "incorporating" my feedback, instead of answering my question!!
> The "off by one" predilection of LLMs is going to lead to this massive erosion of trust in whatever "Truth" is supposed to be, and it's terrifying and going to make for a bumpy couple of years.
This sounds as if searching for truth were a bad thing, when in fact it is what has triggered every philosophical enquiry in history.
I'm quite bullish, and think that LLMs will lead to a Renaissance in the concept of truth, similar to what Wittgenstein did, or Plato's cave, or the late-medieval empiricists.
Pretty sure our identity is just that of the actor behind our own actions.
i.e. our brain models causal relationships, sees the correlation between its own pre-action thoughts and the following action, and therefore models itself as an actor/causal agent responsible for those thoughts and actions.