Low Hanging Fruit

Get ready for A.I. to eat the bottom half of many fields.


In 2017, I called a meeting at an agency I was working for titled "A.I. and its impacts on creative work". The meeting was to discuss the latest developments in artificial intelligence and how it could impact various aspects of what creative agencies do for their clients. I believed at the time that within 10 years, A.I. would dramatically alter how we did creative work. After presenting some interesting developments and highlighting specific concerns, I opened the meeting up to the floor to see what others had to say. Almost all of them were either in complete disagreement about my estimated timelines or unconvinced A.I. could ever take over responsibilities traditionally performed by creative professionals. When the meeting was over, I remember thinking how odd it was that the discussion was so asymmetrical. Me on one side, everyone else on the other. I thought "Maybe I got this one really wrong".

It turns out we were all wrong: this is happening faster than any of us thought possible.

In 2022 alone, DALL•E, Midjourney, Stable Diffusion and ChatGPT have been either released or rapidly updated. The impacts of these technologies have been immediately felt, and yet the total implications are still not fully understood. For instance, there are reports of Stack Overflow users answering questions with ChatGPT to artificially boost their rankings. Two hours before writing this article, I was generating photo-realistic images of people using Stable Diffusion 2.0. We're at the beginning of this revolution and we're already seeing disruption.

How this will impact creatives

At the rate things are progressing, most asset artists in the near future will do little more than fix A.I.-generated art. Midjourney can create sweeping vistas and ornately decorated rooms in minutes. It can make thousands of iterations in the time it would take a single talented artist to make one. Graphic artists are going to feel the pain as well. A.I. will not displace the top artists anytime soon, but the long tail of that distribution will lose considerable work. As an example, websites such as this one often require artwork or photos to break up the text. In the past, I would have sought out freely licensed photographs or possibly paid a graphic artist to create some unique artwork specific to the article. Now I simply run Stable Diffusion locally on my machine. The results are interesting and creative enough to spice up the site. I don't need to hire an artist.

In a few years, most digital art software will have A.I. generation tools integrated directly into the GUI. Artists will simply say "build me a red chair with ornately styled legs circa 1860 France" and the app will produce 50 versions of their request. Then they make a few tweaks. If you're a motion graphics house, do you need 5 artists or just one?

How this will impact knowledge workers

Here are some tasks I fed ChatGPT that it performed nearly perfectly:

  • I asked it to write a cover letter as a fashion designer with 15 years of experience.
  • I asked it to write an email to a prospective client and gave it details about my business.
  • I asked it to guess the ICD-10 codes for a patient with a hurt left shoulder.

Knowledge work will be disrupted heavily by A.I. If you move information around, or your job is to combine and aggregate information, A.I. will be handling a lot of this in the near future. Careers that convert research into prose will be heavily disrupted. I foresee a lot of assistant roles in academia and research being replaced by software.

How this will impact developers

My deepest fear is for junior developers just out of college. ChatGPT and GitHub Copilot can produce high-quality code with alarming frequency; however, they are also prone to serious flaws and errors. As an example, using GitHub Copilot, I was able to produce an entire API call with no prompt other than the method name. Everything about it was well done, down to the way it handled exceptions. However, it called a needed method by an incorrect name. I spotted it immediately, and even if I had not, my tests would have caught the error. A junior developer who doesn't write tests would probably not have seen the issue.
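The failure mode above is easy to sketch. In this hypothetical Python example (all class, method, and function names are invented for illustration, not taken from any real library), the generated function is well-structured and reads correctly, but it calls the client by a method name that doesn't exist; even a single test exercising the happy path would catch it:

```python
class ApiClient:
    """Minimal stand-in for a real HTTP client (illustrative only)."""

    def fetch_user(self, user_id):
        # A real client would make an HTTP request here.
        return {"id": user_id, "name": "example"}


def get_user_name(client, user_id):
    # Plausible-looking generated code: idiomatic structure, sensible
    # variable names -- but the generator guessed the method name.
    # The client exposes fetch_user, not get_user, so this line
    # raises AttributeError at runtime.
    user = client.get_user(user_id)
    return user["name"]
```

The bug is invisible until the code runs, which is exactly why the habit of writing tests matters more, not less, when A.I. writes the first draft.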

This is a big problem from multiple angles:

  • New developers can skip building the muscle memory of writing code by leaning on A.I. to produce most of the grunt work. This prevents valuable learning.
  • Inexperienced developers often produce inefficient or incorrect code, but are guided through mentoring or review via pull requests; if an A.I. produces comparable code, why would I hire junior devs?
  • Most companies with money on the line will not allow A.I.-generated code to go to production without human oversight, but that means companies are now incentivized to hire a few senior engineers and no junior/mid developers. This reinforces the problem of juniors and mids not getting much-needed mentoring and the ability to grow into senior roles.

How this will impact non-technical people

Until there is a socially accepted method to verify the authenticity of media, all images within a few years will be suspect. Currently, A.I. produces just enough artifacts and oddities in images for some experts to detect them, but that will be fixed in future releases, at which point there will be no turning back. Anyone with even a single public image of themselves can have it used as an input for A.I. generation, making them a potential deepfake victim. Here are the grim scenarios I foresee:

  • Teenagers (or worse) will take photos of classmates they find attractive and produce deepfaked pornography. This will also be used for bullying.
  • The concept of "fake news" will seem quaint; any shady politician will be able to claim that legitimate footage is A.I.-generated, and a large portion of the population will believe it. On the flip side, politicians will be able to make even more egregious negative ads by creating altered photos and footage of opponents.
  • The use of writing for any educational assessment or test is officially dead. Essays can be written on any subject within seconds. Students can simply ask A.I. to do a first pass, then alter a few lines here and there to "give it their voice". Antiquated educational systems will need a complete overhaul. At minimum, testing will need to be done in tech-free zones where literally no wearable or phone is allowed in the room.
  • Actual CSAM will be mixed heavily with generated CSAM. This added noise will make it far more difficult to find and prosecute child predators.

Limitations and conclusions

As amazing as the demos have been, some people are talking about A.I. as if it were truly intelligent, or far more advanced than it really is. It makes bad mistakes. It hallucinates frequently. It gives the illusion of advanced intelligence, but fails at simple tasks. What does that mean in practice? It means that tools like ChatGPT will still do amazing things, but they will not replace humans in mission-critical settings anytime soon. No one in a medical setting will allow A.I. to make a final decision without human supervision.

But the pattern in this article is obvious: the low hanging fruit is gone. A lot of jobs will go away, either because the people who perform them introduce errors at the same rate as the A.I. or higher, or because the work is easily accomplished by A.I.'s narrow capabilities. A lot of these careers will disappear because they simply aggregate information up to someone making a decision. The final decision maker will stay employed, but most below will be replaced by software.

This is what we need to acknowledge and be prepared for: many jobs will be gone. Careers won't disappear entirely, but some will lose their bottom half. If you aren't in the higher brackets of your field, you will find yourself in competition with software for your job.


Sebastian Wildwood

Dec. 11, 2022