"Proofreading for typos and logic mistakes: I write a fair amount of English documents: ADRs, technical summaries, internal posts, and so on. I never allow the LLM to write these for me. Part of that is that I think I can write more clearly than current LLMs. Part of it is my general distaste for the ChatGPT house style. What I do occasionally do is feed a draft into the LLM and ask for feedback. LLMs are great at catching typos, and will sometimes raise an interesting point that becomes an edit to my draft."
--
I work on Graphite Reviewer (https://graphite.dev/features/reviewer). I'm also partly dyslexic. I lean massively on Grammarly (using it to write this comment) and type-safe compiled languages. When I engineered at Airbnb, I caused multiple site outages due to typos in my Ruby code that I didn't see and wasn't able to execute before prod.
The ability of LLMs to proofread code is a godsend. We've tuned Graphite Reviewer to shut up about subjective stylistic comments and focus on real bugs, mistakes, and typos. Fascinatingly, it catches a minor mistake in ~1/5 PRs in prod at real companies (we've run it on a few million PRs now). The issues it catches result in a pre-merge code change 75% of the time, about the same rate as a human comment.
AIs aren't perfect, but I'm thrilled that they work as fancy code spell-checkers :)
Graphite | Senior Software Engineer | NYC (Soho) | ONSITE | Full-time | $160k–$220k + Equity | https://graphite.dev
Hi everyone! Graphite (Neo-backed company) is expanding our engineering team.
We raised $20M from Andreessen Horowitz, the founder of GitHub, the founder of PagerDuty, and the creators of React. Our team is 20+ strong with prior experience at Meta, Google, Airbnb, Stripe, DoorDash, Square, and Datadog. We've gained strong revenue traction with thousands of customers, including Snowflake, Vercel, Datadog, Asana, Brex, Ramp, and Square. Our whole team is based in our NYC office.
We’re building the world’s best tool for developers to ship code faster and more efficiently, including features like stacked pull requests and AI-powered code review. I’m looking for kind, smart, and passionate folks who would be excited to build a tool that they and all their engineering friends get to use daily!
*What you’ll do:*
- Design and implement new features for our developer platform.
- Collaborate closely with users to understand their needs and improve workflows.
- Work with a small, talented team in a fast-paced environment.
- Contribute across the stack, from frontend to backend.
*What we’re looking for:*
- Strong software engineering experience.
- Passion for developer tools and improving developer productivity.
- Excellent communication and collaboration skills.
- Willingness to work full-time onsite in our Soho, NYC office.
Hopefully helpful, happy to test others for you :)
--
Raw output
--
Understood. I will conduct a detailed technical analysis of next-generation particle collider projects, focusing on the Future Circular Collider (FCC), International Linear Collider (ILC), Compact Linear Collider (CLIC), Muon Collider proposals, and other significant initiatives.
The report will include:
- Planned energy ranges and collision types
- Estimated construction and operation timelines
- Technical advantages and challenges
- Approximate costs
- Key physics goals
- Review of technical design reports and feasibility studies
- International collaboration and support
- Comparative analysis on feasibility, cost-benefit, scientific potential, infrastructure needs, and environmental impact
- Long-term considerations, including upgrade paths and integration with existing facilities
The analysis will be formatted as a structured technical summary, suitable for presentation at a topical meeting of particle physicists, with comparative tables and figures where relevant.
I will update you when the report is ready.
# Next-Generation Particle Collider Projects: A Comparative Technical Analysis
## Introduction
High-energy physics is charting plans for new colliders to extend beyond the LHC and its High-Luminosity upgrade (HL-LHC). The discovery of the 125 GeV Higgs boson has motivated proposals for a dedicated “Higgs factory” lepton collider to study its properties with precision, as well as future energy-frontier machines to search for new physics. Several next-generation collider initiatives are under consideration internationally as of 2024. These include CERN’s proposed *Future Circular Collider (FCC)*, Japan’s *International Linear Collider (ILC)*, CERN’s *Compact Linear Collider (CLIC)*, various designs for a *Muon Collider*, China’s *Circular Electron-Positron Collider (CEPC)* and its successor *Super Proton-Proton Collider (SppC)*, among others. Each proposal differs in collision type (electron-positron, proton-proton, muon-muon, etc.), energy scale, technology, timeline, cost, and physics focus. This summary reviews each project’s key parameters – *planned energy ranges, collision types, timeline, technical advantages/challenges, cost, and physics goals* – based on technical design reports and feasibility studies. A comparative analysis then contrasts their *technical feasibility, cost-benefit, scientific potential for discoveries, timeline to first data, infrastructure needs, and environmental impact*, highlighting the relative strengths and weaknesses of each approach. We also discuss long-term implications such as upgrade paths, flexibility for future modifications, and integration with existing infrastructure.
(Citations refer to official reports and peer-reviewed sources using the format 【source†lines】.)
## Future Circular Collider (FCC) – CERN
- *Type and Energy:* The FCC is a *proposed 100 km circular collider* at CERN that would be realized in stages. The first stage, *FCC-ee*, is an electron-positron ($e^+e^-$) collider with center-of-mass energy tunable from ~90 GeV up to 350–365 GeV, covering the Z boson pole, WW threshold, Higgs production (240 GeV), and top-quark pair threshold (~350 GeV). A second stage, *FCC-hh*, would use the same tunnel for a proton-proton collider at up to *100 TeV* center-of-mass energy (an order of magnitude above the LHC’s 14 TeV). Heavy-ion collisions (e.g. Pb–Pb) are also envisioned. An *FCC-eh* option (electron-hadron collisions) is considered by adding a high-energy electron injector to collide with the proton beam. This integrated FCC program thus spans both *precision lepton* collisions and *energy-frontier hadron* collisions.
- *Timeline:* The conceptual schedule foresees *FCC-ee construction in the 2030s* and a start of operations by around *2040* (as the LHC/HL-LHC program winds down). According to the FCC Conceptual Design Report, an $e^+e^-$ Higgs factory could begin delivering physics in ~2040, running for 15–20 years. The *hadron collider FCC-hh* would be constructed subsequently (using the same tunnel and upgraded infrastructure), aiming for *first proton-proton collisions in the late 2050s*. This staged approach (lepton collider first, hadron later) mirrors the successful *LEP–LHC sequence*, leveraging the $e^+e^-$ machine to produce great precision data (and to build infrastructure) before pushing to the highest energies with the hadron machine.
...
Thank you very much for doing that. It is actually somewhat impressive. It got a lot of the big-picture comparisons and points correct. There are problems with some details, but overall it does save some work in the initial search process.
What I like is that it asked clarifying questions beforehand, but I wonder if they were just generic. The prompt mentioned that this would be for "presentation at a topical meeting of particle physicists", yet its last question was still about
> Intended Audience: Should the analysis assume a general physics audience or a more specialized group of particle physicists?
Also, probably expected, but it didn't include or reference graphs/plots.
What about a full refresh of the page, or perhaps jump into the dev tools and check "Disable cache"?
Could also be aggressive caching from Cloudflare. Could be they're just trying to announce more stuff to maintain cachet and can't yet support all users forking over $200/month.
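If it does turn out to be CDN-level caching, one quick check from the dev-tools console is to force a single request that bypasses the HTTP cache and see what the edge reports back. A minimal sketch, assuming a placeholder URL (substitute whatever page or endpoint you're debugging):

```ts
// Minimal sketch for the dev-tools console: bypass the local HTTP cache for one request.
// The URL below is a placeholder, not a real endpoint.
const res = await fetch("https://example.com/some/page", {
  cache: "no-store",                         // skip the browser HTTP cache entirely
  headers: { "Cache-Control": "no-cache" },  // ask intermediaries to revalidate
});
// Cloudflare typically attaches a cf-cache-status header (HIT / MISS / BYPASS / DYNAMIC)
// to responses it serves; it may not be readable cross-origin unless exposed via CORS.
console.log(res.status, res.headers.get("cf-cache-status"));
```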
> "Do You Even Want to Be 10x?
You don’t have to. You’re not a lesser engineer if you operate at a steady clip, write clean code, and value stable processes. Seriously. Some workplaces thrive on consistent, methodical improvement—and a 10x renegade might actually hurt them."
LLMs need better criticism
A lot of people absolutely hate this stuff. In some of the spaces I hang out (Mastodon, Bluesky, Lobste.rs, even Hacker News on occasion) even suggesting that “LLMs are useful” can be enough to kick off a huge fight.
I like people who are skeptical of this stuff. The hype has been deafening for more than two years now, and there are enormous quantities of snake oil and misinformation out there. A lot of very bad decisions are being made based on that hype. Being critical is a virtue.
If we want people with decision-making authority to make good decisions about how to apply these tools, we first need to acknowledge that there ARE good applications, and then help explain how to put those into practice while avoiding the many unintuitive traps.
"""
LLMs are here to stay, and there is a need for more thoughtful critique rather than just "LLMs are all slop, I'll never use it" comments.
I agree, but I think my biggest issue with LLMs (and a lot of GenAI) is that they act as a massive accelerator for the WORST (and unfortunately most common) type of human - the lazy one.
The signal-to-noise ratio just goes completely out of control.
Isn't it expected that most, if not all, content will be produced by AI/AGI in the near future? It won't matter much whether you're lazy or not. That leads to the question: what will we do instead? People may want to be productive, but we're observing in real time how the world is going to shit for workers, and that's basically a fact for many reasons.
One reason is that it's cheaper to use AI, even if the result is poor. It doesn't have to be high quality, because most of the time we don't care about quality unless something interests us. I wonder what kind of shift in power dynamics will occur, but so far it looks like many of us will just lose our jobs. There's no UBI (or the social credit proposed by Douglas), salaries are low, and not everyone lives in a good location, yet corporations try to enforce RTO. Some will simply get fired and won't be able to find a new job that sustains their personal budget, unless they already have low costs of living and are debt-free, or have a somewhat wealthy family to cover for them.
Well, maybe at least the government will protect us? Low chance; the world is shifting right, and it will get worse once we start to experience more and more results of global warming. I don't see a scenario where the world becomes a better place in the foreseeable future. We're trapped in a society of achievement, but soon we may not be able to deliver achievements, because if business can get similar results for a fraction of the price needed to hire human workers, then guess what will happen?
These are sad times, full of depression and suffering. I hope that some huge transformation in societies will happen soon, or that AI development slows down so that some future generation has to deal with the consequences (people will prioritize saving their own, and it won't be pretty, so it's better to just pass it down like debt).
Sorry, but the "lazy is bad" crowd is ludditism in another form, and it's telling that a whole lot of very smart people were passionate defenders of being lazy!
AI systems are literally the most amazing technology on earth for this exact reason. I am so glad that it is destroying the minds of time thieves world-wide!
Going by your other examples, shouldn't it be Luddist and not Luddite? English is famously inconsistent and is a difficult language to learn, owing to its linguistic heritage, as it's a bunch of exceptions to rules. Like i before e, except after c, but also in a bunch of other words, so every English student just needs to remember those.
Yeah, maybe not the best examples. There are other "-ite" words, but I can't recall off the top of my head any such that also have "-ism" forms. It is, as always with languages, "just the way it is": I've always seen it rendered as "Luddite", never "-ist". (Maybe because it's named for a person, not a thing or principle? Tried to hint at that possibility with his full name.)
Yup, English may be the most inconsistent of languages. When I was a kid, we used to blame French for being "just exceptions to rules, exceptions to exceptions, and exceptions to those exceptions!", but with a few decades of perspective... Nope, English is far worse.
I can think of some runaway scenarios where LLMs are definitely bad but, indeed, this particular line of criticism is really just luddites longing for a world that probably doesn't exist anymore.
These are the people who regulate and legislate for us; they are the risk-averse fools who would rather have things that are nice and harmless than things that are bad but work.
Personally, I think my only serious ideology in this area is that I am fundamentally biased towards the power of human agency. I'd rather not need to, but in a (perhaps) Nietzschean sense I view so-called AI as a force multiplier to totally avoid the above people.
AI will enable the creative to be more concrete, and drag those on the other end of the scale towards the normie mean. This is of great relevance to the developing world too - AI may end up a tool for enforcing Western culture upon the rest of the world, but perhaps also a force decorrelating it from the McKinseys of tall buildings in big cities.
I suspect people don't particularly hate or despise LLMs per se. They're probably reacting mostly to "tech industry" boom-bust bullsh*tter/guru culture. Especially since the cycles seem to burn increasingly hotter and brighter the less actual, practical value they provide. Which is supremely annoying when the second-order effect is having all the oxygen (e.g. capital) sucked out of the room for pretty much anything else.