AI's impact on artist creativity and productivity
A conversation with Eric Zhou about his research on Generative AI art
This week, my guest was Eric Zhou, a PhD student at Boston University researching the impact of generative AI on art and artists. We discussed one of Eric's recent research projects, in which he acquired access to a vast dataset of activity on a major online art platform. Eric used this data to assess how adopting generative AI tools affected both the productivity and creativity of thousands of artists over 18 months, covering about 4 million artworks.
Eric found that:
After adopting AI tools, artists' productivity dramatically increased (measured as the number of artworks posted), but then slowly decreased over a period of months.
After adopting AI tools, the average novelty of the content and visuals in artists' artwork decreased. Visual homogeneity of AI art is real! But the novelty of artists' most novel content actually increased.
AI has a small, positive impact on “equality” on the AI art platform, lifting the bottom performers (consistent with previous research on Generative AI’s impact).
Artists able to make the most of Generative AI tools are those who have unique ideas and use Generative AI as a means for richer expression of those ideas. There are still positive returns to having a point of view. This aligns with the message from AI artist Niceaunties in a previous conversation.
Eric's next research project will dive further into these findings as well as examine how traditional and AI artists can coexist.
This is an important topic, and there were some pretty interesting findings. I think you'll enjoy the conversation. This transcript has been edited for clarity.
Eric Zhou, welcome to the podcast.
Thanks so much for having me, James. It's an honor.
I wanted to get started by just asking a little bit about your background and whether you yourself are an AI artist.
Yeah, so my background since college has been pure business. So you can think of like finance, marketing. I ended up doing my MBA at Carnegie Mellon where I specialized in business analytics. And, you know, for me to sort of start taking up research in Generative AI was definitely a big departure from what I was used to. But I think there's a lot of big questions, even before Generative AI that I was always interested in. Things like how can humans and AI collaborate and help overcome cognitive biases, limitations, frictions that we might have that prevent us from making more optimal decisions? And so once Generative AI came about, I think that was definitely a big paradigm shifting technology to understand.
I'm very much interested in how Generative AI will impact society and its potential consequences. And so, naturally, AI art seemed like a pretty fun setting to understand this phenomenon. And I have a lot of friends who actually work in creative fields. They work for game studios, they're working on their own indie video game, or they host art lessons or do commissions and things like that. So I had a lot of inspiration and motivation from the people around me to investigate this, and it definitely felt a bit closer to my life than, say, “Let's use an LLM to automate my writing task.” So it felt more personal. So that's sort of what inspired me to become interested in this field, Generative AI, and its intersection with art.
As for whether I'm an AI artist, it's funny that you ask, because if you asked me this a month and a half ago, I would’ve said, “No, but I'm really interested in becoming one and understanding the touch points in the creative process that this technology will transform.” But two months ago, it was my father's birthday and whenever my brother and I are home and it's someone's birthday or a major holiday, we'll hand draw a card for our parents. But at that time we weren't home, so we had to do everything digitally. And I thought, I don't have that much time this week, let me try using some of these tools to make a card. And so I did, and it turned out fine. We got the message across, but it didn't quite feel like the sentimental expression of appreciation that I was intending.
But then literally two weeks ago was my mom's birthday and she came to visit. So I took the opportunity to actually hand draw the card and it felt very much of a different experience. So honestly, I'm leaning more towards becoming…maybe AI on the side, but, you know, still staying true to the traditional art.
You want to become a non-AI artist, it sounds like.
I want to become a non-AI artist, but understand what is possible with these AI tools.
And how did your father like the AI card? Could he tell it was generated by AI?
Yeah, I mean, I told him and he's always trying to understand what my brother and I are doing. We're both PhD students. So I think in that sense it was quite a nice way of really showing that what I'm doing is tangible and that this is real, right. This is how the future might look, whether for good or bad. But he very much appreciated the effort and the thought that went into it. But for me personally, it felt like a different experience.
Let's transition to talking about your analysis and research paper. You were able to obtain data from an AI art platform, so talk a little bit about the nature of that platform and why AI art platforms are an important part of this Generative AI movement.
I think the reason this is important is that a lot of what goes on on these platforms is kind of a social phenomenon, right? How do organic artists react to, say, AI artists coming in and showing off their new technology? But we were able to secure some data from one of the large art sharing platforms specifically intended for hobbyists. So there's a diverse set of users producing all kinds of stuff, from short stories to concept art for video games, to people's own original designs, and even to people just messing around in Microsoft Paint. So a wide variety of different individuals and artistic talents coming from this dataset.
And it's all visual, like still images. Are people sharing videos and things like that?
I haven't seen any videos. It's mostly still images, or even just written passages if it's a short story. But we filtered the data to include only digital art.
And one more question on the platform. This is a platform where people are looking at photos, liking photos, sharing their own photos. So, as you said, it's for hobbyists, kind of a social media platform of some kind, I guess — and by the way, the reason we're being somewhat vague is there's an NDA in place, so we can't say the name of the platform or give too many details — but just to outline the nature of the platform, this is not something where people are selling their art or anything like that. This is more people appreciating each other's art and liking each other's art and sharing their own art.
Yep.
Okay. How many pieces of artwork were you able to look at, or how many AI artists?
Yeah, so the entire sample that we're looking at came out to about 53,000 users, about 5,800 of which were known AI adopters who published something in one of the subcommunities specifically for AI on the platform. So this came out to upwards of 4 million total artworks. Since we're trying to understand what is the broader impact, I think it was really important that we have these large samples and representative samples of what the creative community really looks like.
So you were able to follow these artists over time to kind of see the impact of Generative AI before and after these Generative AI tools were released. And what were the kind of research questions that you were asking?
So I think the immediate reaction to Generative AI is always, “Oh, what's going to happen to jobs? Who's going to be replaced? What are the consequences?” So a lot of fear. And so we tried to answer big questions that were of interest to the general population.
So first we asked, how is Generative AI affecting humans' creative production? I think this is an important question because creativity is not typically something where there's a clear objective function or a clear path to go from point 'A' to point 'B'. It's a very open-ended thing. And so we want to understand how this technology might help augment humans in this creative process.
The second question is, is Generative AI enabling humans to produce more creative content or not? Of course, there are debates between pro-AI and anti-AI camps, and both sides have their points and preferences. So we're trying to at least provide some evidence on the consequences: is this technology expanding our ability to express new ideas, or is it potentially preventing us from exercising our creative talent?
And then our last research question is basically, for whom does Generative AI assist the most? For whom does it help produce more creative and valuable content? And so this is trying to get at, you know, are there potential inequalities or potential baseline skills that are required to leverage this technology to its full potential?
Let's go over those three research questions in a little bit more detail and maybe focus first on the productivity and creativity question, because you were able to follow some AI artists over time. So talk a little bit about the methodology there and how you were able to assess — and we can talk about the definitions of productivity and creativity in a moment — but how were you able to do this before-after analysis to assess the impact of Generative AI?
In a typical randomized controlled experiment, you have your treatment group and your control group. The treatment group is people who eventually adopt AI tools at some point, and then your control group are people who never adopt.
So the challenge in this social setting is that people have their reasons for adopting the technology in the first place. For example, maybe their preferred type of content is actually easily automated by AI tools, so they would be more likely to adopt. So one thing that we had to do was employ some econometric causal inference machinery in the background to make sure that our sample was balanced on potential confounding variables that might impact their adoption decision, but also impact their productivity or even their creativity.
What we essentially did was define treatment and control groups, where each individual could adopt the technology at any point in time, and that could differ across users. We basically defined our data within each month. So for anyone who adopted AI in August of 2022, we said, okay, you are labeled as an adopter at that point. And we just tracked their outcome and compared that with the control group over time and quantified the difference. The control group is essentially there to say, okay, here's what would have happened to you — your most comparable individual — if you had not adopted, versus here's the uplift, or maybe downturn, that we see in your outcome if you adopted AI. So that's the general methodology we use.
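For readers who want a concrete picture of the event-time comparison Eric describes, here is a minimal sketch. It assumes a hypothetical user-month panel with invented columns (user_id, month, adoption_month, outcome) and compares adopters to never-adopters naively; the paper's actual estimator additionally matches adopters to comparable controls and adjusts for confounders.

```python
import pandas as pd

# Hypothetical long panel: one row per user-month, with a pre-computed outcome
# column (e.g. log posts or favorites per view) and the month of AI adoption
# (missing for users who never adopt). File and column names are illustrative.
panel = pd.read_csv("user_month_panel.csv", parse_dates=["month", "adoption_month"])

adopters = panel[panel["adoption_month"].notna()].copy()
controls = panel[panel["adoption_month"].isna()]

# Event time: months relative to each adopter's own adoption month,
# so adopters with different adoption dates can be stacked together.
adopters["event_time"] = (
    (adopters["month"].dt.year - adopters["adoption_month"].dt.year) * 12
    + (adopters["month"].dt.month - adopters["adoption_month"].dt.month)
)

# Naive comparison: adopters' mean outcome at each event month versus the
# overall never-adopter mean. This stands in for the matched control group.
adopter_path = adopters.groupby("event_time")["outcome"].mean()
control_level = controls["outcome"].mean()
print((adopter_path - control_level).loc[-3:7])  # three months before to seven after
```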
So the data started in January of 2022 and then ended in June of 2023. So it's about 18 months. And some people on the platform were adopting AI technologies at various points over that period. For each individual who adopted, you looked at the three months before they adopted and then the seven months after they adopted. Is that the right way to think about the timing of this assessment?
Yes, that's right.
You also have some charts in your paper showing a timeline of the dataset you have and when different generative AI tools were released. So what are the generative AI art tools that we're kind of talking about here that were released within this timeframe of your dataset?
Yeah. So we had Midjourney version one in, I believe it was February of 2022, and then DALL-E 2 was a couple months after. And then August was the original Stable Diffusion. So these three kind of make up the mainstream state of the art up to this point.
And so we primarily observed adoption of these three tools, but I'm sure there were other lesser-known tools, maybe things like DreamBooth. But generally, they all work the same. So that's why we focused on this period, where we saw the concentration of the main three tools being released.
And let's go over the particular metrics you were able to look at. People may or may not agree with these definitions. It's hard to quantify what is meant by creativity and novelty and that kind of thing, but I think you did your best, and the definitions seemed pretty reasonable. So there are four of them. I'll go over just what they are, and then we can talk about the definitions and what you found.
Creative productivity
Creative value
Content novelty
Visual novelty
So I think productivity is maybe the easiest one to think about. So how did you measure the impact of someone adopting Generative AI on their creative productivity?
We were provided the full history of all the publications on this platform by the users in our sample. So what we could do is simply say, okay, how many things did you publish in month one, month two, month three? And we just took the log of the number of artworks that you publish in any given month. The outcome would be the percentage gains or losses in your productivity over time.
When it came to creative value, this one is tricky. We had to read a ton of literature on computational creativity, which basically speaks to: what are the criteria for what makes something creative? The commonly accepted criteria are value and novelty. The literature basically defines value as the extent to which an artifact is accepted within the current cultural climate. Again, it's a very subjective thing, subject to cultural trends. We tried our best to capture this via the data that we were given. We said it should be something about how people are reacting to the artwork. So we said, okay, it should be the number of favorites that an artwork receives per view. You know, artworks might receive different exposure at different times. So this is our most objective way of capturing that. You can imagine some issues with that approach.
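As a small illustration of the two outcome metrics described above (creative productivity and creative value), here is a hedged sketch. It assumes a hypothetical per-artwork table with user_id, month, favorites, and views columns; the paper's actual variable construction may differ in its details.

```python
import numpy as np
import pandas as pd

# Hypothetical per-artwork table: one row per posted artwork.
artworks = pd.read_csv("artworks.csv", parse_dates=["month"])

monthly = artworks.groupby(["user_id", "month"]).agg(
    posts=("favorites", "size"),      # number of artworks posted that month
    favorites=("favorites", "sum"),
    views=("views", "sum"),
)

# Creative productivity: log of the number of artworks published in the month,
# so changes read roughly as percentage gains or losses.
monthly["log_posts"] = np.log1p(monthly["posts"])

# Creative value: favorites received per view, normalizing for exposure.
monthly["favorites_per_view"] = monthly["favorites"] / monthly["views"].clip(lower=1)

print(monthly.head())
```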
Then we're talking about content novelty versus visual novelty. So this is an interesting one, and I had to read a bit into philosophy in the art space to figure out what was a proper delineation. There's actually Nelson Goodman's Languages of Art, which proposes this idea of denotation and exemplification. It's analogous to the subjects in an art piece versus the physical features that are used to depict that subject, the contents of the image. So naturally, we said, okay, let's try and disentangle this into the idea and the actual visual execution.
We can think of content novelty as capturing the idea. Our way of measuring this was to take each artwork and use a multimodal model to generate a description of it, and that description should capture the focal contents of the artwork. Then we had to rely on some other literature, which says we can think of creativity as existing in some conceptual space. You can think of a conceptual space as a plane where two similar artworks will be close together in distance. And so, naturally, that aligns with how we think of embeddings in the machine learning domain. So we embedded all the descriptions, and we followed an algorithm where we established a baseline period of all artifacts published up to that point. We then computed the cosine distance between each new embedding and everything in the baseline, added the embeddings from that period to the baseline, and so on. So it's quite an involved process, but it essentially allows us to recover how similar people's artworks are, on average, to everything that came before.
For visual novelty, we relied on a self-supervised visual representation learning algorithm to essentially just directly embed the images. And then we followed the same process. So there was a lot of legwork that went into getting these outcome variables.
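To make the rolling-baseline computation concrete, here is a minimal sketch of the content-novelty side. It assumes a hypothetical captions_by_month dict of machine-generated artwork descriptions and substitutes a generic sentence-embedding model for whatever captioning and embedding models the paper actually used; the visual-novelty analogue would swap in image embeddings from a self-supervised encoder.

```python
import numpy as np
from sentence_transformers import SentenceTransformer  # stand-in text embedder

# Assumed input: captions_by_month maps each month to a list of
# machine-generated descriptions of that month's artworks.
model = SentenceTransformer("all-MiniLM-L6-v2")

def normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

baseline = None          # embeddings of everything published so far
novelty_by_month = {}

for month, captions in sorted(captions_by_month.items()):
    emb = normalize(model.encode(captions))
    if baseline is not None:
        # Cosine distance (1 - similarity) to every earlier artwork, averaged:
        # higher means this month's content is more novel relative to the past.
        sims = emb @ baseline.T
        novelty_by_month[month] = float(np.mean(1.0 - sims))
    # Add this month's artworks to the baseline for subsequent periods.
    baseline = emb if baseline is None else np.vstack([baseline, emb])
```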
You have a great diagram in your supplementary material describing cosine similarity. You first show two paintings from Andy Warhol's portraits of Marilyn Monroe. These are essentially the same portrait, differing only in the color of the painting. And so because the images are similar, they have a cosine similarity close to one, which is the maximum value. And then you compare the portrait of Marilyn Monroe to some other images showing what the cosine similarity is. And the last comparison is between Marilyn Monroe and a mushroom. And those two things have a cosine similarity close to zero because they're not similar at all. I thought that was a fun demonstration of cosine similarity. So kudos there.
We always try to find ways to make research more fun. It's not always the most fun.
What did you find in terms of the increase in productivity after people started to adopt generative AI tools?
Yeah, basically, in the month that they start using the tool, we see close to a 50% increase in the volume of artworks they publish compared to their pre-treatment levels. So quite a significant jump. But in the month after their adoption month, we see that spike to a 100% gain. So they literally double their productivity. On average, this amounts to about seven additional posts for the average user. But there are some crazy individuals out there producing hundreds, even thousands per month. We do see this uplift taper off over time, but it's still at about a 25 to 30% gain over pre-treatment levels by six or seven months.
And do we know what's happening there? I guess the theory would be someone has discovered an AI tool, so they're experimenting with it, they're sharing their experiments, and then over time, like any new thing, it kind of dies off a little bit and they keep using it, but it's not as novel anymore. Is that like the working theory of what's happening?
Yeah, I guess we didn't really have a working theory, but I definitely think that is sort of what's going on. The novelty and excitement of this new tool, you can create anything that you can put down in words. That's sort of what's driving this initial excitement. But then you have to wonder for these individuals, what is their objective on the platform? Some of them may have just shown up and then AI tools came out and they thought, “Oh, I can make myself more prominent, so let me start producing a ton of stuff.” Whereas you might see other people who are just in it for the flavor of the month, as a hobby, and then they just taper off over time and maybe even exit the platform. So we don't know. But I think these are all possibilities.
The productivity graph you showed is quite striking because you can definitely see that in the month they adopt, their productivity shoots up, as you were saying, to 50% and then 100%. What did you find for the creative value?
Yeah, so value was a bit more of a drawn out effect. So initially we saw a lot of noise around the time of adoption, which suggested some people were appreciated more, but a lot of people were also falling behind in terms of peer appreciation. We sort of saw this for the first two periods, but over time, maybe about four months out, we see it steadily rise. So trending up over time towards the end of our observation period. So it amounts to about a 50% increase in likelihood of receiving a favorite per view by the 7th month, which is quite significant because the average likelihood before any such treatment was about 0.02. So we're talking 0.03, right. So 2% to 3%, quite a significant jump.
So is that somehow saying that their peers are more appreciative because on average their AI generated posts are getting favored more than their non-AI posts before they adopted AI? So they're somehow being appreciated more for their AI art? Is that the right interpretation?
I think this one could have several interpretations. One is certainly subcommunities have formed where people can share and this type of content is well accepted and appreciated.
I think another aspect of it might be that the visual fidelity is improving over what they were previously capable of. So if you purely have people who are agnostic to the fact that you use AI and simply evaluate you based on the visual quality of your artwork, I would venture to say that there is likely an improvement, especially if you were one of those Microsoft Paint people before.
Yeah, I've seen some pretty incredible drawings in Microsoft Paint, I have to say.
I mean, I have too. [Laughing] I looked through some of these people's works.
So you mentioned some AI communities. When people are posting their AI art on this platform, are they only posting in these AI communities, or are they posting in other kinds of more general communities, and then also maybe posting some in these AI communities? How does that work?
Yeah, that's something we didn't specifically dig into, but I think it's safe to assume that there's a mix of both, because you could easily find some of these AI artworks on the home page of this website, like any sort of art sharing platform.
So let's move on and talk about the content novelty and the visual novelty. Again, some interesting findings. What did you find there in terms of how that novelty increased or decreased after users adopted these Generative AI tools?
So we looked at this from two angles. One was, on average, how was people's idea novelty decreasing or increasing over time? And then we also looked at their maximum, most novel idea: how does that change over time? For the average idea novelty, we find it consistently decreasing over time. So basically this suggests people's ideas are becoming more similar on average. And I think the real kicker here is when we look at their most novel idea, the maximum content novelty, we see a marginal increase, not necessarily strongly significant, but there's certainly a pretty obvious upward trend. And this kind of suggests this technology might be enabling people to explore creative frontiers, but on average, it's resulting in a lot of stuff coming out that is very similar and homogeneous.
And is that both for the content novelty and the visual novelty?
So visual novelty is a different story. Both the average and the maximum visual novelty decreased. So visual homogeneity is definitely a thing. And you can imagine why: with these models, there are a lot of pre-trained checkpoints, low-rank adaptations, all tuned for producing a systematic visual style. So we could imagine that would be a big driver of why things end up looking the same.
So how can we put those two ideas together to kind of interpret them? The change in the content novelty and the change in the visual novelty? I'm looking at the charts in your paper right now. So the maximum content novelty, that is, like, the most extreme or weird, let's call it, ideas in the artwork are slowly increasing over time, but everything is just kind of looking the same in terms of the style. Is that right? Or is there a better interpretation?
Yeah, I would say that's pretty fair. Basically, you can think of it as what this technology is doing is it's allowing people to take a creative process that would originally be: I come up with an idea, I sketch it out, I try and execute it visually, I don't like it, I go back, I refine it. And it turns the process into simply just being an exercise of verbal expression and being able to manipulate really interesting concepts in your mind, trying to write that down.
So in that sense, we should expect that this technology facilitates novel idea exploration. But because it's also automating the visual execution, we have less hands-on control over that directly, unless we're getting into the weeds of, you know, ControlNet or all these different add-ons, depth-to-image, and so on. So there's a lot that you can do. But I think this is just highlighting what we're seeing in aggregate.
You have this quote in your paper, it's similar to what you were just saying. “Our results hint that the widespread adoption of Generative AI technologies in creative fields could lead to a long run equilibrium, where in aggregate, many artifacts converge to the same types of content or visual features.”
So it's been a little bit since the paper's been published. Do you still think that's true, based on kind of these anecdotes you're seeing? You've used a little bit of the AI tools now yourself, is that kind of where your mind is still at, that these technologies might lead to this equilibrium where everything is kind of homogenous? Or do you think the technologies are changing and adapting with newer versions in a way that might allow them to produce different outputs?
You know, it's funny that you mentioned this, because this is literally the next thing that I'm working on. And we're still working on producing results. But, you know, I think the results from this new paper might hint that there are opportunities to escape this long run equilibrium, where this technology, in the hands of the right people, will be able to chart out new ideas in a creative space such that there's new domains for everyone to explore and really try to dig deep and figure out what is the next interesting concept that I can produce. So I think this paper we’re discussing on this podcast sort of hints at, yeah, it could be a problem. I think coming up later this year, I might have a different answer for you.
I follow a lot of AI artists on Twitter, and what you just said really resonates. And I was able to interview in a previous conversation Niceaunties — that's her art name — and she makes these wonderful, very strange, out-there short videos and images about aunties and auntie culture based on her life growing up with eleven aunties, and she's part of the Fellowship AI collective.
Yeah, there's a lot of really incredible artists doing really interesting things with these AI tools. There's a lot of like, blandness and sameness. But I think it's like anything, there are people who are figuring out how to use these tools in new ways. The tools themselves are improving. They're combining AI tools with traditional digital tools in really interesting ways to make some spectacular art. So I think it will be interesting to follow. And yeah, I look forward to reading that next research paper you're working on.
Yeah, I think you described sort of the phenomenon that we saw with photography and how it changed the portraiture domain. With photography there were the same concerns, right? Oh, it's replacing the artist; it's not really a creative expression; you're not really intentional about what you're trying to convey. But, you know, early photographic processes had a lot of imperfections. And one thing that we did see was that portrait artists took inspiration from those imperfections and decided to try and figure out how to use them to their advantage. And they sort of arrived at, oh, let's try and represent abstract ideas or emotions, sentimental things, in their portrait work. So in that sense, it spurred a creative evolution of that domain. So I think there's potential for that as well in the AI art space. It's just a matter of whether people are willing to accept AI art as an art form.
Yeah, when photography first became an "art," there were debates about whether you should be able to copyright a photograph, because, as you were saying, some people said there was no creative expression. There's just a person sitting in front of some kind of a scene and you're just capturing that. You're not adding anything to it. So there was this idea that it went against copyright because you weren't really adding any intellectual contribution to that piece of work. And we're kind of in the same place now with AI art, where currently you cannot copyright it because it's algorithmic, but we'll have to see how that evolves as well.
I wanted to talk, too, about the other two analyses you did in your paper, because these look at the other questions we mentioned at the top, which is how the best versus the average artists are taking advantage of these new tools on the platform.
So the second analysis you did was around gains in artwork value. So kind of explain what the methodology there was and what you were looking for and what the results were.
Yeah. So for this analysis, we wanted to understand how an individual's baseline creativity — their skill absent any AI assistance — affects their ability to produce interesting artwork with an AI system, basically to get at what underlying skills might be necessary to succeed with Generative AI. And so we broke this down in two ways.
First, we said, let's bucket all of the AI adopters into quartiles based on how novel their artworks were prior to AI adoption. We did the same thing for how novel their visuals were prior to AI. And then we simply looked at how these different tiers correlate with their ability to produce things that are really valuable to their peers. And basically, what we find is that regardless of how good individuals were at producing ideas before AI, so long as they use AI to help them arrive at more interesting ideas, they will be evaluated more favorably.
But once they try to explore visual features, they're not quite as good, which is basically to say, the ideas here are what matter. And so the new creative paradigm is not about how we represent ideas, it's what are we representing? How do we verbalize that, and how do we find these interesting connections between concepts that we're familiar with and produce something that's unfamiliar to us?
Now, on the other hand, when we look at these quartiles based on how novel their artworks were visually before AI assistance, we really only find that their ability to produce ideas matters, and using this tool to improve their visual fidelity actually doesn't help them. So, again, this points in the same direction: the ideas are core here. The verbalization of interesting concepts is core here. This is the key driver of whether you'll succeed with Generative AI in producing artworks or not.
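To illustrate the bucketing step Eric describes, here is a minimal sketch. It assumes a hypothetical per-user table of adopters with invented columns for pre-adoption novelty and post-adoption value; the paper's actual analysis uses regressions rather than simple group means, so this only shows the quartile construction.

```python
import pandas as pd

# Hypothetical per-user table of AI adopters: pre-adoption content and visual
# novelty, plus the average value (favorites per view) of post-adoption artworks.
users = pd.read_csv("adopters.csv")

# Bucket adopters into quartiles of pre-adoption novelty.
users["content_quartile"] = pd.qcut(users["pre_content_novelty"], 4, labels=[1, 2, 3, 4])
users["visual_quartile"] = pd.qcut(users["pre_visual_novelty"], 4, labels=[1, 2, 3, 4])

# Compare post-adoption value across the tiers.
print(users.groupby("content_quartile")["post_value"].mean())
print(users.groupby("visual_quartile")["post_value"].mean())
```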
So you're saying it's really about ideation. How interesting is the content of the artwork? And for some people, how weird or wacky is it? For others, how interesting is it? So that's what's important and what's differentiating artists is that content, it's not the visual representation of those ideas. Did I say it right?
Yeah, yeah, pretty much. You can intuitively justify that, right, because the model handles the visual. At the end of the day, it's why when we go on Facebook and we see someone post an AI-generated picture with a million likes, some of us can say that it's AI generated, right? We can see the stylistic elements, the patterns; they're familiar to us now. The reason that it gets a million likes is because it's representing something that is pleasing to people. So it's capturing that type of phenomenon. It matters what ideas we're expressing. The visual execution is just a byproduct of that.
I wanted to ask how this compared with some of the other research on productivity and creativity. You mentioned some of these research projects in your paper. So just to name a few. GitHub has shown that there is increased productivity and happiness for coders who are using their Copilot coding platform. Now, this is GitHub telling us that GitHub tools are great. So, I mean, take it for what it is, but that's what their research shows.
Ethan Mollick has worked with some colleagues and shown that consultants with Generative AI tools are able to be more productive. There's some work with writers as well. So there's this kind of convergence, I guess, that AI is helping in some ways. And there are questions being asked about whether we're helping the lowest performing people in a certain space, the average person, or the highest performer, how that distribution is being affected by these Generative AI tools, and which class of workers or artists, or whatever it may be, is benefiting most. Do you want to say anything about how your research aligns with that broader research or not?
So, I mean, the productivity result is obvious and certainly aligns with the current literature. I think there's been some work examining the creative potential of large language models for ideation or any sort of written creative task, let's call it. And I think they find a similar story, where the novelty of the content is decreasing.
I think what sets our research apart — and I think Ethan Mollick's paper gets at this a little bit with the idea of centaurs versus cyborgs, as they framed it — is essentially: who is the originator of the idea? Or are we over-relying on the technology? There are two very different ways of using AI. One is: I myself produced the idea, and now I want to refine it with the assistance of the technology. The second way is: I have no idea, so I ask the model for inspiration, and then we go from there. Two very different creative processes, all because one was the originator of the idea in one setting versus not in the other.
I think what makes text-to-image AI interesting is that it will always be the individual producing the idea. And so we're seeing how that different process moderates who benefits more from this technology. And we see that it's manifesting through the idea expression.
If we were just to stop at saying all of the change due to AI tools is about productivity, we'd be missing something. Sure, it helps everyone accelerate their learning curve. Everyone can now compete at the same level. But we want to break it down by what metric you're looking at, right? Idea versus visual. You could have two different stories there.
I remember when I talked to Niceaunties, she said, yeah, anyone can use these tools, anyone can copy my work, but do they have a point of view? And I think that's kind of what you're getting at, what connects your research to the broader philosophy of artists: they have a point of view, they have ideas, they're trying to express those ideas. And having a point of view is still rewarded with Generative AI tools. It's not going away just because it's easier to make art.
Yep, yep. Totally agree. And I think we can all understand why we might frown upon someone who just puts in some words into Stable Diffusion, gets an output and just posts that on an art platform. It's not intentional. It's sort of just a toy example of what the technology is capable of, but it's not the expression of the ideas that's coming through. It's just a case study at that point.
Before we close, do we want to talk about the third analysis, the impact on equality? You have this kind of equality measure in the paper, comparing the best artists to the more average artist. What do you want to say about that analysis?
So I will clarify: it's not exactly about best to worst. We assume everyone on the platform is competing for favorites, basically. And originally, without any intervention of AI adoption or anything like that, we find that favorites are highly concentrated among a few individuals, and it's even more concentrated among eventual AI adopters before they adopt, which is basically to say there are some very select individuals who were dominating this platform.
And what we see after adoption is that it becomes a bit more fair, let's say, and I'm trying to be careful with my words, because equality and fairness are very loaded terms to use to describe what's going on. But basically it's to say that there are people who are now becoming more competitive on the platform, and I think it's signaling towards, yeah, there is a democratization benefit that we might be seeing coming through.
And the reason we might think this is important is that there are people who probably never could have functioned on this platform because they could not draw. But now this simple ability — OK, I won't call it simple — this particular ability to express really interesting ideas opens the door to an entirely new segment of individuals who can now compete on this platform. So I think that's actually a very important finding. And intuitively, I think that might be what's happening.
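As an aside for readers: one common way to quantify this kind of concentration is a Gini coefficient over per-user favorite counts. The sketch below is an assumption about how such a measure could be computed, not necessarily the exact metric used in the paper, and the example numbers are invented.

```python
import numpy as np

def gini(values):
    """Gini coefficient of non-negative counts: 0 means favorites are spread
    evenly across users, values near 1 mean a few users receive almost all of them."""
    v = np.sort(np.asarray(values, dtype=float))
    n = len(v)
    cum = np.cumsum(v)
    return (n + 1 - 2 * cum.sum() / cum[-1]) / n

# Hypothetical per-user favorite totals before and after AI adoption spreads.
favorites_before = np.array([0, 1, 2, 3, 500])
favorites_after = np.array([5, 10, 20, 40, 400])
print(gini(favorites_before), gini(favorites_after))  # concentration drops slightly
```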
The democratization effect is pretty small, though, right?
It's small, but we did some statistical tests to determine whether this was a product of randomness. So we were basically able to find that, yes, the effect may be small, but it is significant, and it's a step in the direction of "equality." So, you know, it's a sign there's something there.
And I think that finding also correlates with other research, right? We can think about someone who loves to code but maybe isn’t great at it, or someone who struggles with writing. These generative AI tools are kind of “lifting the bottom.” Again, these terms are loaded, and we don't want to be disrespectful to people, but if we do think about things in terms of skills of some kind, it does seem like the lower skilled people are able to kind of benefit from these tools, and it's raising the bottom, so to speak. Does that agree with what your findings were?
In terms of the third analysis, I would say so, yes.
Is there anything we haven't touched on in your paper that you wanted to call out?
Yeah, I think one thing that is worth thinking about and looking into is the actual process that people are following. And you can imagine that there's a lot of diversity in how people approach producing artwork, or using Generative AI to produce artwork. There are certainly going to be a lot of people like how I described: they're pretty indiscriminate about what they post. They just put in the words, they get the output, they post that, and they have thousands and thousands of posts on their profile that are just these simple, one-off AI-generated images.
But I think what is certainly more interesting is the people who really use these tools to their fullest extent — really using ControlNet, in-painting, all these add-ons and technologies that have been adapted for use in a Stable Diffusion pipeline. That would really signify, yeah, we have something special here. This paradigm shift of the creative process is real, and these are the prime examples of creative expression with Generative AI.
And that's sort of why we propose this term, "generative synesthesia." There are people out there who have these really abstract ideas that they might not be able to express directly, but through this collaborative process with text-to-image, they might be able to really dig deep, exploit some of those ideas, and eventually represent them visually. And I think that's where the frontier will be mapped out, and that is where my research agenda is trying to head next.
Yeah, it's interesting because these tools started as kind of one-off, standalone text-to-image tools. But as you alluded to, the pipeline is quickly changing. Adobe Photoshop and Adobe Illustrator now have a lot of these abilities built in, right? So you can do text-to-image within these tools, you can do in-painting, you can do all kinds of stuff. So it tightens the workflow and allows people to move back and forth between traditional workflows and these new AI workflows. And we're starting to see the same thing with video. Adobe has previewed a Premiere Pro update with Sora technology built in for b-roll, along with all kinds of other Generative AI tools.
I think as the technology gets integrated more into these existing tools, the pipelines are going to change as well. And I think it will have an impact on the ideation phase, as you were talking about earlier in the general creative workflow. So it will be interesting to see who can take advantage of these tools.
We've touched on your future research agenda a little bit, but what do you want to say in closing there in terms of what's next for you and what questions are top of mind?
Yeah, so what's next for me? Getting the second paper out, investigating who is expanding the creative frontier, and to what extent we actually see this idea-space expansion facilitated by the release of new generative tools. We can think of this as piggybacking off of the maximum content novelty result, where we left an open question we were curious about: Who is driving this? Is it people of particular talents? And how are they driving it — what are they exploring? So this is going to be the next piece.
I think more broadly, I'm really interested in understanding how human and AI artists can coexist, because there's necessarily this competition underlying the two. And, you know, my third piece of work will sort of investigate how is this impacting labor market competition? How can organic artists, say, differentiate themselves from AI artists such that they can still succeed, gain employment, develop a niche, while leaving the AI people to their devices?
So I'm looking forward to embarking on those projects. And the overall message from me is that I hope that we can treat Generative AI and any such innovations as a potential tool for human flourishing and not as a threat to prevent us from expressing ourselves or ruining our livelihoods. The technology is here to stay. It's important we find ways to accommodate that in the best way possible for as many people as possible. So, yeah, that's the message that I want to share.
Yeah. Thanks for that. That was well put, and I can't wait to follow your research agenda. Those are really interesting questions, especially the non-AI artists versus AI artists and how the two can coexist and thrive together. It's a really interesting question. I think a lot of people have that question on both sides of the fence, so can't wait to see that research come out.
Eric Zhou, thanks for being on the podcast.
Thanks so much for having me. It was a pleasure.