Get ready for A.I. to eat the bottom half of many fields.
In 2017, I called a meeting at an agency I was working for titled "A.I. and its impacts on creative work". The meeting was to discuss the latest developments in artificial intelligence and how it could impact various aspects of what creative agencies do for their clients. I believed at the time that within 10 years, A.I. would dramatically alter how we did creative work. After presenting some interesting developments and highlighting specific concerns, I opened the meeting up to the floor to see what others had to say. Almost all of them were either in complete disagreement about my estimated timelines or unconvinced A.I. could ever take over responsibilities traditionally performed by creative professionals. When the meeting was over, I remember thinking how odd it was that the discussion was so asymmetrical. Me on one side, everyone else on the other. I thought "Maybe I got this one really wrong".
It turns out we were all wrong: this is happening faster than any of us thought possible.
In 2022 alone, DALL•E, Midjourney, Stable Diffusion and ChatGPT have been either released or rapidly updated. The impacts of these technologies have been immediately felt, and yet the total implications are still not fully understood. For instance, there are reports of Stack Overflow users answering questions with ChatGPT to artificially boost their rankings. Two hours before writing this article, I was generating photo-realistic images of people using Stable Diffusion 2.0. We're at the beginning of this revolution and we're already seeing disruption.
At the rate things are progressing, most asset artists in the near future will do little more than fix A.I.-generated art. Midjourney can create sweeping vistas and ornately decorated rooms in minutes. It can make thousands of iterations in the time it would take a single talented artist to make one. Graphic artists are going to feel the pain as well. A.I. will not displace the top artists anytime soon, but the long tail of that distribution will lose considerable work. As an example, websites such as this one often require artwork or photos to break up the text. In the past, I would have sought out freely licensed photographs or possibly paid a graphic artist to create some unique artwork specific to the article. Now I simply run Stable Diffusion locally on my machine. The results are interesting and creative enough to spice up the site. I don't need to hire an artist.
In a few years, most digital art software will have A.I. generation tools integrated directly into the GUI. Artists will simply say "build me a red chair with ornately styled legs circa 1860 France" and the app will produce 50 versions of their request. Then they'll make a few tweaks. If you're a motion graphics house, do you need five artists or just one?
Here are some tasks I fed ChatGPT that it performed nearly perfectly:
Knowledge work will be disrupted heavily by A.I. If you move information around, or your job is to combine and aggregate information, A.I. will be handling a lot of this in the near future. Careers that convert research into prose will be heavily disrupted. I foresee a lot of assistant roles in academia and research replaced by software.
My deepest fear is for junior developers just out of college. ChatGPT and GitHub Copilot are able to produce high-quality code with alarming frequency; however, they are prone to serious flaws and errors. As an example, using GitHub Copilot, I was able to produce an entire section of an API call without any prompt other than the method name. Everything about it was well done, down to the way it handled exceptions. However, it called a needed method by an incorrect name. I spotted it immediately, and even if I had not, my tests would have caught the error. A junior developer who doesn't write tests would probably not have been able to see the issue.
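To illustrate the kind of error I mean, here is a minimal sketch in Python. All of the names (`ApiClient`, `get_user`, `fetch_user_name`) are hypothetical, not the code Copilot actually produced; the point is that even a trivial unit test fails loudly when generated code calls a method by a plausible but wrong name.

```python
from typing import Optional


class ApiClient:
    """Minimal stand-in for a real HTTP client; every name here is invented."""

    def get_user(self, user_id: int) -> dict:
        # A real client would make an HTTP request; this returns canned data.
        return {"id": user_id, "name": "Alice"}


def fetch_user_name(client: ApiClient, user_id: int) -> Optional[str]:
    """The kind of helper a code assistant might produce from the name alone."""
    try:
        # Had the assistant emitted `client.getUser(user_id)`, a plausible
        # but nonexistent method name, this line would raise AttributeError.
        user = client.get_user(user_id)
    except ConnectionError:
        return None
    return user.get("name")


def test_fetch_user_name() -> None:
    # Even this trivial test would fail loudly on a misnamed method call,
    # surfacing a bug that a casual read of the generated code could miss.
    assert fetch_user_name(ApiClient(), 1) == "Alice"


test_fetch_user_name()
```

Reading generated code is not the same as verifying it; a test suite turns a subtle naming error into an immediate, unmissable failure.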
This is a big problem from multiple angles:
Until there is a socially accepted method to verify the authenticity of media, all images within a few years will be suspect. Currently, A.I. produces just enough artifacts and oddities in images for some experts to detect them, but that will be fixed in future releases, at which point there will be no turning back. Anyone with a single public image of themselves can easily have it used as an input for A.I. generation, potentially becoming the victim of a deepfake. Here are the grim scenarios I foresee:
As amazing as the demos have been, some people are talking about A.I. as if it were truly intelligent, or far more advanced than it really is. It makes serious mistakes. It hallucinates frequently. It gives the illusion of advanced intelligence, but fails at simple tasks. What does that mean in practice? It means that tools like ChatGPT will still do amazing things, but they will not replace humans in mission-critical settings anytime soon. No one in a medical setting will allow A.I. to make a final decision without human supervision.
But the pattern is obvious from this article: the low-hanging fruit is gone. A lot of jobs will go away because the people who perform them introduce errors at the same rate as the A.I. or higher, or because the work is easily accomplished by the narrow capabilities A.I. already has. Many of these careers will disappear because they simply aggregate information up to someone making a decision. The final decision maker will stay employed, but most of those below will be replaced by software.
This is what we need to acknowledge and be prepared for: many jobs will be gone. Careers won't disappear entirely, but some will lose their bottom half. If you aren't in the higher brackets of your field, you will find yourself in competition with software for your job.
Sebastian Wildwood
Dec. 11, 2022