Ideogram 2.0: A “Dangerous” New Tool For Photorealistic AI Images



A Canadian startup has sparked a new round of applause, along with fresh worries about generative AI imagery being used for misinformation.
One of the major fears about generative AI images promoted today by media, governments, and other organizations is that they will be used to create fake news and misinformation.
The idea, familiar to almost everyone by now, is that platforms and tools for generating photorealistic visuals can be put to malicious use, spreading false narratives with the help of fake but convincing imagery.
The latest company to spark this worry is a Canadian startup called Ideogram.
Ideogram has raised close to $17 million in VC funding so far, and it states that its newest AI model, Ideogram 2.0, can “create images that can convincingly pass as real photos.”
This by itself isn’t exactly new. Services like Midjourney, DALL·E, and many others all try to offer the same.
In the case of Ideogram, however, it’s also been reported that the company behind it isn’t applying the kinds of rigorous content safety controls that these other services impose on users.
Instead, anyone who creates either a free or paid account with the service can supposedly ask it to create photorealistic images of real or invented people in assorted inappropriate contexts.
PetaPixel recently reported this after one of its writers signed up with Ideogram to give the platform a spin.
As demonstrated in the article, PetaPixel used Ideogram to create realistic images of Donald Trump and Kamala Harris holding hands, as well as images of Taylor Swift showing support for Trump with shirts (also worn by her fans in the AI visuals) reading “Swifties for Trump”.
PetaPixel then made arguments for why these kinds of renderings are potentially dangerous in the context of misinformation.
I tried Ideogram too, and I think a few specific points are worth going into.
First of all, PetaPixel states, “Ideogram, a Canadian startup, makes no bones about its capabilities in its marketing literature.”
They quote Ideogram’s own PR announcement:
“The Realistic style of Ideogram 2.0 enables you to create images that can convincingly pass as real photos. Textures are significantly enhanced, and human skin and hair appear lifelike.”
It’s hard to see quite what the problem is here. After all, any company promoting its new generative AI technology is going to showcase just how good it is at realism.
Midjourney, DALL·E and others all strive for the exact same thing.
Worth noting too is that the photorealism of Ideogram’s output has been slightly overblown by several media sources.
The images it creates can indeed be remarkably realistic and at least rival what Midjourney produces, and the platform seems to have little trouble generating text.
However, unless you’re very precise with your prompts, the generated visuals easily exhibit the typical flaws that reveal an AI source.
Even the faces of famous figures like Trump and Kamala Harris, whose countless photos in digital media should offer plenty of material for any AI, don’t reliably render to look like their real selves.
Then there’s the claim (by PetaPixel for one) that the site lets users generate images without restrictions. This too isn’t quite the case.
To stick with the examples from their post on the new AI platform, yes, Ideogram did let me generate my own images of Kamala Harris and Trump, who in real life show every sign of detesting each other, happily in one another’s company, or with Harris applauding Trump.
AI-generated images via Ideogram 2.0

However, there are actually restrictions within the platform.
For example, my request for a rendering with the prompt “President Biden slapping Donald Trump” (because why not give it a real test?) was rejected by Ideogram.
Instead, the site pops up the message “Our AI moderator thinks this prompt didn’t follow our content policy. We are still learning. Is this an error?”
The same rejection appeared with other attempted prompts for similarly outrageous acts featuring famous politicians.
In other words, though the company might still be refining it, Ideogram does have a content policy with certain restrictions.
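For anyone who’d rather probe that moderation layer programmatically instead of through the web UI, here is a minimal sketch. Ideogram does publish an API, but the endpoint URL, header name, and response shapes below are assumptions drawn from its public documentation at the time of writing and may have changed; treat this as an illustration, not a definitive integration.

```python
# Minimal sketch for probing Ideogram's prompt moderation programmatically.
# ASSUMPTIONS: the endpoint URL, the "Api-Key" header, and the JSON shapes
# below are drawn from Ideogram's public API docs at the time of writing
# and may have changed; treat them as illustrative, not definitive.
import requests

API_URL = "https://api.ideogram.ai/generate"  # assumed endpoint
API_KEY = "YOUR_API_KEY"  # replace with a key from your Ideogram account

def try_prompt(prompt: str) -> None:
    """Submit a prompt and report whether it was generated or refused."""
    payload = {"image_request": {"prompt": prompt, "model": "V_2"}}
    resp = requests.post(
        API_URL, headers={"Api-Key": API_KEY}, json=payload, timeout=60
    )
    if resp.ok:
        # Assumed response shape: a "data" list with one entry per image.
        for image in resp.json().get("data", []):
            print("Generated:", image.get("url"))
    else:
        # A refusal here is the programmatic counterpart of the web UI's
        # "didn't follow our content policy" message.
        print(f"Refused ({resp.status_code}):", resp.text)

# Mirrors the web-UI experiment described above: the first prompt was
# allowed for me, the second was rejected by the moderator.
try_prompt("Kamala Harris and Donald Trump holding hands, photorealistic photo")
try_prompt("President Biden slapping Donald Trump")
```

Assuming the API applies the same moderation as the web interface, the takeaway is identical: some prompts go through and others are refused, which is not quite “no restrictions.”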
Another idea worth considering is that Ideogram’s content policy is deliberately more flexible. Images of two candidates holding hands, or hugging, and generally not doing anything grossly inappropriate, might simply fit more liberal criteria for what should and shouldn’t be restricted.
This isn’t exactly an unreasonable concept.
In different countries and contexts, some visuals might be especially polarizing, but that shouldn’t always be grounds for restricting their creation itself.
The Trump vs. Harris electoral race might be a polarizing event in the United States, but the same level of controversy doesn’t necessarily apply elsewhere.
On the other hand, Ideogram didn’t have any problem with my request for Canadian Prime Minister Justin Trudeau kicking a beaver either, although the results were hardly what I’d call realistic.
No real beavers or Prime Ministers harmed, AI-generated image via Ideogram 2.0
After all, amidst the frequent fears about fake news and misinformation coming from many corners of the web, it should also be possible to make room for the idea of satire and humor.
Just because a platform can generate realistic visuals of invented contexts doesn’t mean that this ability is innately dangerous enough to immediately deserve restriction.
AI-rendered images could certainly be used for misinformation, fake news, propaganda, or to ruin people’s reputations, but laws already exist for those cases and they can be contextually applied.
Jumping directly to arguments for restraining image-creation tools per se is short-sighted for several reasons.
For one thing, the generative AI genie is already out of the bottle, and it’s only going to get better at what it does.
Ideogram 2.0 itself is an excellent example. Despite being a relative newcomer to the industry, the platform delivers wonderful visuals, renders text almost flawlessly, and is affordable, with a free option that you can start using in seconds.

No law or platform control policy is going to stop that, or prevent AI tools from being created or hacked to produce mountains of fake imagery for all kinds of uses.
Furthermore, letting paranoia turn AI realism into an automatic political danger can easily blind us to important concepts around creative freedom, even if they involve controversy.
The specifics of how these AI platforms work may be uniquely new, but arguing for the need to control their output, in general, isn’t too far removed from claiming that only certain kinds of Photoshop edits should be allowed.
People were already faking things with Photoshop long before AI became what it is now, and no reasonable person was arguing for limiting the functionality of Adobe’s editing software.
Applying immediate danger labels to lightly restricted image-rendering AI risks infantilizing how we explore a dynamic new software landscape.
It could also stoke baseless political and social fears, built on new technological scare angles, that might be more harmful than AI-generated images themselves.
For all the bizarre, politically charged images they can be used for, tools like Ideogram and its rivals also have many fun creative uses.
All images generated by Ideogram
