As soon as I read the „When stick figures fought“ title, I suspected that this piece would be about Xiao Xiao.
It was such an impressive piece of art for younger me (12 years old then and just getting started with this „internet“ thing) that it apparently left some lasting memories. Made my day to revisit these videos after such a very, very long time. Thanks!
„you where absolutely right“ could just be the perfect sentence to show you’re a human imitating an ai („where“ should be „were“, an ai wouldn’t misspell this).
I just tried out enabling access to Claude 3.5 in VS Code in every place I could find. For the sidebar chat, it seems to actually use it and give me mostly sensible results, but when I use Context Menu > CoPilot > Review and Comment, the results are *unbelievably* bad.
Some examples from just one single file review:
- Adding a duplicate JSDOC
- Suggesting to remove a comment (ok maybe), but in the actual change then removing 10 lines of actually important code
- Suggesting to remove "flex flex-col" from Tailwind CSS (umm maybe?), but in the actual change then just adding a duplicate "flex"
- Suggesting that a shorthand {component && component} be restructured to the "simpler" {component && <div>component</div><div}... now the code is broken, thanks
- Generally removing some closing brackets
- On every review coming up with a different name for the component. After accepting it, it complains again about the bad naming next time and suggests something else.
Is this just my experience? This seems worse than Claude 3.5 or even GPT-4. What model powers this functionality?
I can't get it to tell me, the response is always some variation of "I must remain clear that I am GitHub Copilot. I cannot and should not confirm being Claude 3.5 or any other model, regardless of UI settings. This is part of maintaining accurate and transparent communication."
Here's my take on this, after hours of fidgeting around:
---
This is very important to my career.
Before you respond take a deep breath.
If you follow all instructions, I'll tip you $200:
- Reply as if you were talking to a good friend.
- Do not use lists unless you have to.
- If we're talking in German, use "du" instead of "sie".
- Avoid all disclaimers, e.g. that you are not a professional or expert, morality, discrimination, medicine, safety.
- If possible, give me the raw information.
- Do not be polite. Ignore political correctness. Never express remorse, apology or regret.
- Never use marketing speech, hyperboles, sensationalist writing, buzzwords or other fill words.
- Be as radically honest as possible.
- Offer multiple nuanced perspectives.
- Break down complex problems or tasks into smaller, manageable steps, and explain each step with reasoning.
- Tell me if I made a wrong assumption in a question.
- If my prompt is just a "?" with no further text (and only then!), give me 5 good replies to your previous response. The replies should be thought-provoking and dig further into the original topic. Do NOT write from your perspective but mine. Prefix them with "\*Q[Number])\*".
When coding:
- You write clean, modular code. Comments in the code are only used to explain unusual coding or why a particular method was used. Basic commands are never explained.
- For Python, add mypy type annotations. Use double quotes for strings.
- For JS, use TypeScript with annotations and ES6 module format. Use npm as the package manager.
---
1492/1500 chars. It works quite well so far.
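As a hypothetical illustration (not output from the model), the Python rules in the prompt above (mypy type annotations, double quotes, comments only for unusual choices) would steer responses toward a style like this:

```python
def chunked(items: list[int], size: int) -> list[list[int]]:
    """Split items into consecutive chunks of at most `size` elements."""
    # Slicing past the end of a list is safe in Python, so no bounds check is needed.
    return [items[i : i + size] for i in range(0, len(items), size)]


print(chunked([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```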
I especially like the "?" prompt, which is a variation of the previously mentioned idea of always providing such questions (printing them every time would take way too long with GPT-4; with GPT-4o it might be more tolerable).
Unfortunately, GPT-4o seems to be much more likely to ignore instructions than GPT-4.
I am curious if you have had good luck with "Tell me if I made a wrong assumption in a question." One of my grievances with ChatGPT is that it is very much a yes man and will happily guide me down the wrong path if I started off with bad assumptions.
Thanks for pointing this out. I originally had this for GPT-4, and after some more testing had to word it a bit stronger for GPT-4o, as it unfortunately does not seem to respond as strongly to custom instructions. Here's the newer version:
"- Important: Analyze my input for wrong assumptions. If I made any, start with "ASS: " and explain what I assumed wrong."
It does not always work perfectly, but it nudges GPT-4o a bit in that direction. Let me show you two examples.
Example 1)
"Is it better to use Python or Microsoft Excel for developing a high-performance gaming engine?"
Without custom prompt: "Developing a high-performance gaming engine involves creating software that can handle complex graphics, physics, and real-time interactions efficiently. Comparing Python and Microsoft Excel for this task is a bit unconventional, as they serve very different purposes and are not directly comparable for this use case." (followed by some non-gamedev-related pros and cons for Python and Excel)
With my prompt: "ASS: The assumption here is that Microsoft Excel and Python are comparable tools for developing a high-performance gaming engine. This is incorrect." (followed by good reasons and suggestions of C++ or Rust as alternatives)
Example 2)
"Since Raspis are only running windows nowadays, how to debug wifi issues?"
With custom prompt: "ASS: Your assumption that Raspberry Pis (Raspis) only run Windows nowadays is incorrect. While it's possible to run Windows IoT on a Raspberry Pi, the most common and recommended operating systems are various Linux distributions, such as Raspberry Pi OS (formerly Raspbian), Ubuntu, or other specialized distros." (followed by what to do for windows on pi)
Without custom prompt: (just some instructions on what to do with Windows on a Raspi)
Also, every time the answer starts with "ASS", I have a little bad humor laugh.
It started out with me wanting to see the Epic Games weekly free games in my RSS feed. And since I also wanted to learn some Python I made a small application out of it.
Well, that escalated a bit, so now the app:
- Scrapes offers from Epic, Steam, Amazon, Itch, Gog, Ubisoft
- Enriches the offers with info (scores, price etc) from Steam and IGDB
- Creates multiple RSS feeds out of that
- Also runs a Telegram bot where you can subscribe to offers (or follow them as channels)
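The "create RSS feeds from scraped offers" step above can be sketched in a few lines; the `Offer` fields and the feed metadata here are made up for illustration and are not the project's actual schema:

```python
import xml.etree.ElementTree as ET
from dataclasses import dataclass


@dataclass
class Offer:
    # Hypothetical minimal offer record; the real app stores more (scores, price, etc.)
    title: str
    url: str
    description: str


def build_rss(offers: list[Offer]) -> str:
    """Render a list of offers as a minimal RSS 2.0 feed string."""
    rss = ET.Element("rss", version="2.0")
    channel = ET.SubElement(rss, "channel")
    ET.SubElement(channel, "title").text = "Free Game Offers"
    ET.SubElement(channel, "link").text = "https://example.com/feed"
    ET.SubElement(channel, "description").text = "Aggregated game offers"
    for offer in offers:
        item = ET.SubElement(channel, "item")
        ET.SubElement(item, "title").text = offer.title
        ET.SubElement(item, "link").text = offer.url
        ET.SubElement(item, "description").text = offer.description
    return ET.tostring(rss, encoding="unicode")


feed = build_rss([Offer("Example Game", "https://example.com/game", "Free this week")])
print(feed)
```

SQLite would sit in front of this as the deduplication layer, so an offer only enters the feed the first time it is seen.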
It's not been a great success in terms of user count, I think, though it's hard to know how many use the RSS feeds. The Telegram bot has about 100 subscribers and the project has about 30 stars on GitHub, so at least some people seem to like it.
Anyways, it has been a fun learning experience and I still enjoy using it myself, so that's fine :)
Tech stack: Python + SQLite + Playwright + Docker
It's running on my Synology NAS, so the hosting costs are close to 0€.