Yeah, I've been using the same iterative process with img2img. Using AI removes most of the toil involved in this kind of photo manipulation work (tedious masking, colour matching, and relighting). As these tools improve, it will be interesting to see what professional artists can do with them.
I've been recreating the 50 worst heavy metal album covers using AI as well; I'm currently at 30. Recently I've found Stable Diffusion plus DALL-E inpainting to be a good combination.
Plus there is no reason why someone couldn't build a specialised AI model to do vectorisation and another to generate simplified versions of vectors.
People are already doing this by combining DALL-E 2 with GFPGAN for face restoration. So there may be a role for understanding how to combine these tools effectively.
Here's a trick for this: use Google reverse image search. That will give you various descriptions of similar images. Then use those descriptions in your prompt.
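The trick above is really just gathering phrases and merging them into one prompt. A minimal sketch of that merging step (the function name `buildPrompt` and the de-duplication behaviour are my own illustration, not part of any tool):

```typescript
// Merge descriptions gathered from a reverse image search into a single
// prompt string, dropping duplicate phrases (case-insensitive).
function buildPrompt(descriptions: string[], style: string): string {
  const seen = new Set<string>();
  const phrases: string[] = [];
  for (const d of descriptions) {
    const key = d.trim().toLowerCase();
    if (key && !seen.has(key)) {
      seen.add(key);
      phrases.push(d.trim());
    }
  }
  // Append any style modifiers you want at the end of the prompt.
  return `${phrases.join(", ")}, ${style}`;
}
```

In practice you'd still iterate on the result by hand; this just gives you a consistent starting prompt from the search descriptions.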
Some similar examples below, but you'll need to engineer the prompt a bit more to get it exactly the same.
There are designers who appreciate the distinction between the way you design a desktop application vs a consumer website or mobile app. A lot of the design literature and discussion seems to be focused on the latter these days.
A lot of things that desktop applications did are missing in most web apps: high information density, resizable panels, panel layouts (side by side, etc.), keyboard shortcuts, embedded CLIs, and so on. Their absence makes the user experience a lot worse for power users.
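Keyboard shortcuts are a good example of how little desktop-style plumbing most web apps have. A sketch of a small shortcut registry, assuming you normalise key events into chord strings (the names `ShortcutMap` and `chordOf` are illustrative, not from any framework):

```typescript
// The shape of the fields we read off a browser KeyboardEvent.
interface KeyInput {
  key: string;
  ctrlKey: boolean;
  altKey: boolean;
  shiftKey: boolean;
}

// Normalise a key event into a chord string like "ctrl+shift+p".
function chordOf(e: KeyInput): string {
  const parts: string[] = [];
  if (e.ctrlKey) parts.push("ctrl");
  if (e.altKey) parts.push("alt");
  if (e.shiftKey) parts.push("shift");
  parts.push(e.key.toLowerCase());
  return parts.join("+");
}

// Maps chord strings to handlers; dispatch returns the handler's result,
// or undefined when no shortcut matches.
class ShortcutMap {
  private handlers = new Map<string, () => string>();

  register(chord: string, handler: () => string): void {
    this.handlers.set(chord, handler);
  }

  dispatch(e: KeyInput): string | undefined {
    const handler = this.handlers.get(chordOf(e));
    return handler ? handler() : undefined;
  }
}
```

In a real app you'd wire `dispatch` to a `keydown` listener on `document`; the point is that the core mechanism is tiny, so its absence in most web apps is a design choice rather than a technical barrier.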
Most designers happen to serve mostly general consumers, but there are countless heavily used industry-specific web applications out there that are navigated primarily by shortcuts, and that software has designers too. In my experience, those designers are often trying to turn B2B, high-data-density solutions into fluffy UIs despite clear feedback that it's not what users want.
It depends who that common customer is. If you're building a tool for industry and your users are electrical engineers, you would design it differently from a consumer mobile app, is all I'm saying.
Yeah, I've been having fun with it recreating bad heavy metal album art (https://twitter.com/P_Galbraith/status/1548597455138463744). It's good, but surprisingly difficult to direct when you have a composition in mind. For a few of these I burned through 20-30 prompts, and I can't see myself forking out hundreds of dollars to roll the dice.
My brother is a digital artist, and while he was excited at first, he found it not all that useful. Mainly because it falls apart with complex prompts, especially when you have a few people or objects in a scene, specific details you need represented, or a specific composition. You can do a lot with inpainting, but it requires burning a lot of credits.
You use a web app to interface with it. Agreed, going from DALL-E 2 to Midjourney is pretty painful. Hopefully Midjourney will create a web UI for it like OpenAI/Craiyon have.
Posted video example here https://twitter.com/P_Galbraith/status/1564051042890702848