Re: AI Generated Imagery
Posted: Mon Aug 22, 2022 8:16 pm
MidJourney is surprisingly bad at two people.
That is not dead which can eternal lie, and with strange aeons bring us some web forums whereupon we can gather
http://garbi.online/forum/
A synthetic media artist named Jason Allen entered AI-generated artwork into the Colorado State Fair fine arts competition and announced last week that he won first place in the Digital Arts/Digitally Manipulated Photography category, Vice reported Wednesday based on a viral tweet.
Allen used Midjourney—a commercial image synthesis model available through a Discord server—to create a series of three images. He then upscaled them, printed them on canvas, and submitted them to the competition in early August. To his delight, one of the images (titled Théâtre D'opéra Spatial) captured the top prize, and he posted about his victory on the Midjourney Discord server on Friday.
Allen's victory prompted lively discussions on Twitter, Reddit, and the Midjourney Discord server about the nature of art and what it means to be an artist. Some commenters think human artistry is doomed thanks to AI and that all artists are destined to be replaced by machines. Others think art will evolve and adapt with new technologies that come along, citing synthesizers in music. It's a hot debate that Wired covered in July.
There's also the fairness element since it isn't clear if Allen told the judges about his use of image synthesis, though some Twitter users have reportedly contacted the judges and discovered that they didn't know. Curiously, the art was considered good enough to fool human artists, and someone on Twitter joked that it settled the debate over "whether AI art is art."
Max Peck wrote: ↑Wed Aug 31, 2022 5:25 pm
AI-wielding artist wins first place at Colorado State Fair
A synthetic media artist named Jason Allen...
Made me think the artist was synthetic.
Really nice! I've only gotten to play with these a little bit, basically the 20 free uses that midjourney gives. I'm experimenting with using it to generate book covers. I was pretty pleased with what I got with my brief experience (favorite is the image below) but I wasn't managing to get anything like some of those pics you got. Any tips on giving prompts?
Jaymann wrote: ↑Wed Aug 31, 2022 6:14 pm
Max Peck wrote: ↑Wed Aug 31, 2022 5:25 pm
AI-wielding artist wins first place at Colorado State Fair
A synthetic media artist named Jason Allen...
Made me think the artist was synthetic.
Smart is understanding the artist wasn’t synthetic.
AI-generated artwork is incredibly popular at the moment, and it's now possible to generate photorealistic images right on your PC, without using external services like Midjourney or DALL-E 2.
Stability AI is a tech startup developing the “Stable Diffusion” AI model, which is a complex algorithm trained on images from the internet. Following a test version available to researchers, the company has officially released the Stable Diffusion model, which can be used to create images from text prompts. Unlike Midjourney and other models/generators, Stable Diffusion aims to create photorealistic images first and foremost — something that has already led to controversy over “deepfake” content. However, it can also be configured to mimic the style of a given artist.
Stable Diffusion is unique because it can run with a typical graphics card, instead of using remote (and expensive) servers to generate images. Stability AI recommends using NVIDIA graphics cards right now, but full support for AMD and Apple Silicon is in the works.
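For anyone wondering what "runs with a typical graphics card" actually looks like in practice, here's a minimal sketch using the Hugging Face diffusers library. It assumes an NVIDIA card with enough VRAM and the publicly released v1.4 weights; the prompt and settings are placeholders, not anything from the article.

```python
# Minimal sketch: local text-to-image with Stable Diffusion via Hugging Face's
# diffusers library. Assumes an NVIDIA GPU with roughly 8 GB of VRAM and that
# you've accepted the model license on huggingface.co beforehand.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4",   # publicly released v1.4 weights
    torch_dtype=torch.float16,         # half precision to fit consumer cards
)
pipe = pipe.to("cuda")                 # NVIDIA is the recommended path for now

prompt = "a lighthouse on a cliff at sunset, cinematic lighting, photorealistic"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("lighthouse.png")
```

Generation time depends heavily on the card and the step count, but a single 512x512 image usually takes anywhere from a few seconds to a couple of minutes.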
It helps to look at the community showcase to see how people are using descriptors like "cinematic lighting" or "photography," etc. I'm assuming you've searched for and read the brief manual explaining things like weighting (I don't do that much) and setting the stylize or chaos commands. But to be honest, I tend to just keep making variations and rerolling until I get something closer to what I want. That image is really cool, but it also shows why the community has been wearing out the --testp command for photorealism: it suppresses the "Midjourney style" of added noise, which works for detail like foliage or clouds but makes a mess of buildings.
Max Peck wrote: ↑Sun Sep 04, 2022 9:33 pm
Stable Diffusion Brings Local AI Art Generation to Your PC
Yeah, Midjourney's new photorealism mode goes straight to deepfake, like it or not. As long as you persist until it gets the eyes right, it's pretty convincing.
Alright, just to be clear, this isn't like Artbreeder. I'm not giving it uploaded images to reference. Had a hard time deciding which way to go with this. With time, I could probably get something closer to the composition of a scene from the movie, but here's what I thought might serve as a little spread of the range of things Midjourney can do right now, not counting the more fantasy and stylized stuff I was showing in my D&D-flavored post above.
AI image generation is here in a big way. A newly released open source image synthesis model called Stable Diffusion allows anyone with a PC and a decent GPU to conjure up almost any visual reality they can imagine. It can imitate virtually any visual style, and if you feed it a descriptive phrase, the results appear on your screen like magic.
Some artists are delighted by the prospect, others aren't happy about it, and society at large still seems largely unaware of the rapidly evolving tech revolution taking place through communities on Twitter, Discord, and GitHub. Image synthesis arguably brings implications as big as the invention of the camera—or perhaps the creation of visual art itself. Even our sense of history might be at stake, depending on how things shake out. Either way, Stable Diffusion is leading a new wave of deep learning creative tools that are poised to revolutionize the creation of visual media.
Related (sort of), but I follow lots of indie RPG developers on DriveThru and one of them sent out an email today saying they just lost a bunch of clients because of the ability to use AI Generated Imagery in various RPG products. Given what I've seen shared here, I can believe it. It's cool, but it never occurred to me how it might be impacting artists specifically until I saw that email today. Crazy, crazy times.
In the past few years, art made by programs like Midjourney and OpenAI’s DALL-E has gotten surprisingly compelling. These programs can translate a text prompt into literally (and controversially) award-winning art. As the tools get more sophisticated, those prompts have become a craft in their own right. And as with any other craft, some creators have started putting them up for sale.
PromptBase is at the center of the new trade in prompts for generating specific imagery from image generators, a kind of meta-art market. Launched earlier this summer to both intrigue and criticism, the platform lets “prompt engineers” sell text descriptions that reliably produce a certain art style or subject on a specific AI platform. When you buy the prompt, you get a string of words that you paste into Midjourney, DALL-E, or another system that you’ve got access to. The result (if it’s a good prompt) is a variation on a visual theme like nail art designs, anime pinups, or “futuristic succulents.”
You can install Stable Diffusion locally on your PC, but the typical process involves a lot of work with the command line to install and use. Fortunately for us, the Stable Diffusion community has solved that problem. Here’s how to install a version of Stable Diffusion that runs locally with a graphical user interface!
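The guide walks through one of the community builds step by step, so I won't duplicate it here, but conceptually those front ends are a thin GUI wrapped around the same pipeline. Below is a purely hypothetical sketch of that idea using Gradio; it is not the community project's actual code, and the model name and layout are placeholder choices.

```python
# Hypothetical sketch of a minimal local GUI around Stable Diffusion using
# Gradio. This illustrates the general idea behind the community web UIs;
# it is not their actual code.
import torch
import gradio as gr
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

def generate(prompt: str, steps: int):
    # Run the pipeline once and return a single PIL image.
    return pipe(prompt, num_inference_steps=int(steps)).images[0]

demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(label="Prompt"), gr.Slider(10, 100, value=50, label="Steps")],
    outputs=gr.Image(label="Result"),
    title="Local Stable Diffusion",
)
demo.launch()  # opens a browser UI on localhost (port 7860 by default)
```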
Starting today, we are removing the waitlist for the DALL·E beta so users can sign up and start using it immediately. More than 1.5M users are now actively creating over 2M images a day with DALL·E—from artists and creative directors to authors and architects—with over 100K users sharing their creations and feedback in our Discord community.
Responsibly scaling a system as powerful and complex as DALL·E—while learning about all the creative ways it can be used and misused—has required an iterative deployment approach.
Since we first previewed the DALL·E research to users in April, users have helped us discover new uses for DALL·E as a powerful creative tool. Artists, in particular, have provided important input on DALL·E’s features.
Their feedback inspired us to build features like Outpainting, which lets users continue an image beyond its original borders and create bigger images of any size, and collections—so users can create in all new ways and expedite their creative processes.
Learning from real-world use has allowed us to improve our safety systems, making wider availability possible today. In the past months, we’ve made our filters more robust at rejecting attempts to generate sexual, violent and other content that violates our content policy and built new detection and response techniques to stop misuse.
We are currently testing a DALL·E API with several customers and are excited to soon offer it more broadly to developers and businesses so they can build apps on this powerful system.
We can’t wait to see what users from around the world create with DALL·E. Sign up today and start creating.
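The API was still in limited testing when that announcement went out, so the snippet below is only a guess at what calling it might look like, modeled on the image-generation endpoint in the 0.x-era openai Python package; the prompt, size, and environment variable are placeholders, not anything from the post.

```python
# Hedged sketch: requesting an image from OpenAI's image-generation endpoint
# using the 0.x-era openai Python package. Illustrative only; the DALL-E API
# was still in limited testing when the announcement above was written.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]  # placeholder: your own API key

response = openai.Image.create(
    prompt="an astronaut lounging in a tropical resort, vaporwave style",
    n=1,                # number of images to generate
    size="1024x1024",   # square sizes only: 256x256, 512x512, or 1024x1024
)
print(response["data"][0]["url"])  # temporary URL of the generated image
```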
https://twitter.com/MalwareArt
The time of malware-generated AI art is upon us, with the first images I've spotted rivalling those of the great H.R. Giger in both scale and complexity. Not only has the Twitter user who brought about this unholy union of AI and malware art, Greg Linares, made some impressive snapshots of the questionable latent space in image form, they've also used their skills to design a malware-generated, cyberpunk-themed music album—and it actually slaps.
Though just as the throes of past wars have inspired great artists to create their masterpieces, so too do the malware artists of our time draw from some dark inspiration.
Linares goes by Laughing_Mantis on Twitter, pulling together their malware-based AI art designs under a dedicated Malware Art profile. For each piece curated there, Linares notes that the art is generated on a heavily modified local version of Stable Diffusion v1.4 and a separate 2.0 install. Basically, he uses "strings and text from inside malware, as well as filenames, and other usable metadata in order to drive the AI art prompts."
It was Linares' glorious collection of images generated using malware from the 'Sandworm group' that caught my eye in the early hours of this morning, the debugging information and test folders for which, Linares explains, were littered with Dune references—hence the Sandworms. These sinuous, cable-draped creations tower over humanity in mist-filled scenes of imminent demise, mouth agape, and hungry for flesh.
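Linares' actual tooling isn't published in the article, but the recipe he describes above (pulling printable strings, filenames, and other metadata out of a sample and stitching them into a prompt) is easy to picture. A purely hypothetical sketch, with made-up file names, thresholds, and style tags:

```python
# Hypothetical sketch of the approach Linares describes: extract printable
# strings and the filename from a malware sample and stitch them into an
# art prompt. Paths, thresholds, and style tags are invented for illustration.
import re
from pathlib import Path

def extract_strings(path: str, min_len: int = 6) -> list[str]:
    """Return printable ASCII runs of at least min_len bytes from a binary."""
    data = Path(path).read_bytes()
    pattern = rb"[ -~]{%d,}" % min_len   # runs of printable ASCII characters
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

def build_prompt(sample_path: str, max_terms: int = 12) -> str:
    terms = extract_strings(sample_path)
    # Keep the more word-like strings and fold in the sample's filename.
    words = [t for t in terms if re.fullmatch(r"[A-Za-z_\-. ]+", t)][:max_terms]
    words.append(Path(sample_path).name)
    return ", ".join(words) + ", cyberpunk, dark, highly detailed"

if __name__ == "__main__":
    # "samples/sandworm_dropper.bin" is a made-up path, not a real sample.
    print(build_prompt("samples/sandworm_dropper.bin"))
```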
Meet Loab, the AI Art Woman Haunting the Internet: I discovered this woman, who I call Loab, in April. The AI reproduced her more easily than most celebrities. Her presence is persistent, and she haunts every image she touches. CW: Take a seat. This is a true horror story, and veers sharply macabre.
Are there ghosts in our machines? Well, of course not, but a recent viral Twitter thread might have you believing there is something sinister lurking behind your computer screen, just waiting to be unleashed.
On Sept. 6, the internet was introduced to "Loab," an apparently AI-generated "woman." The internet promptly began calling her "the first cryptid of latent space," "creepy," a "demon" and "a queer icon." There's a lot going on here, so let's explain.
First, to understand Loab, you need to understand what's happening in AI art.
Some artists have begun waging a legal fight against the alleged theft of billions of copyrighted images used to train AI art generators and reproduce unique styles without compensating artists or asking for consent.
A group of artists represented by the Joseph Saveri Law Firm has filed a US federal class-action lawsuit in San Francisco against AI-art companies Stability AI, Midjourney, and DeviantArt for alleged violations of the Digital Millennium Copyright Act, violations of the right of publicity, and unlawful competition.
The artists taking action—Sarah Andersen, Kelly McKernan, Karla Ortiz—"seek to end this blatant and enormous infringement of their rights before their professions are eliminated by a computer program powered entirely by their hard work," according to the official text of the complaint filed to the court.
Alex Champandard, an AI analyst who has advocated for artists' rights without dismissing AI tech outright, criticized the new lawsuit in several threads on Twitter, writing, "I don't trust the lawyers who submitted this complaint, based on content + how it's written. The case could do more harm than good because of this." Still, Champandard thinks that the lawsuit could be damaging to the potential defendants: "Anything the companies say to defend themselves will be used against them."
To Champandard's point, we've noticed that the complaint includes several statements that potentially misrepresent how AI image synthesis technology works. For example, the fourth paragraph of section I says, "When used to produce images from prompts by its users, Stable Diffusion uses the Training Images to produce seemingly new images through a mathematical software process. These 'new' images are based entirely on the Training Images and are derivative works of the particular images Stable Diffusion draws from when assembling a given output. Ultimately, it is merely a complex collage tool."
In another section that attempts to describe how latent diffusion image synthesis works, the plaintiffs incorrectly compare the trained AI model with "having a directory on your computer of billions of JPEG image files," claiming that "a trained diffusion model can produce a copy of any of its Training Images."
During the training process, Stable Diffusion drew from a large library of millions of scraped images. Using this data, its neural network statistically "learned" how certain image styles appear without storing exact copies of the images it has seen. In rare cases of images that are overrepresented in the dataset (such as the Mona Lisa), however, a type of "overfitting" can occur that allows Stable Diffusion to spit out a close representation of the original image.
Ultimately, if trained properly, latent diffusion models always generate novel imagery and do not create collages or duplicate existing work—a technical reality that potentially undermines the plaintiffs' argument of copyright infringement, though their argument about "derivative works" being created by the AI image generators is an open question without a clear legal precedent, to our knowledge.
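One concrete way to see what "learned statistically, not stored" means: generation starts from random latent noise, so rerunning the same prompt with different seeds produces different novel images rather than retrieving anything from a library. A small sketch with the diffusers library, where the model name and prompt are just placeholders:

```python
# Sketch: the same prompt with different random seeds yields different novel
# images, because generation starts from random latent noise rather than a
# lookup into stored training pictures. Model and prompt are placeholders.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "CompVis/stable-diffusion-v1-4", torch_dtype=torch.float16
).to("cuda")

prompt = "a watercolor painting of a fox in a snowy forest"
for seed in (1, 2, 3):
    generator = torch.Generator(device="cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"fox_seed_{seed}.png")  # three distinct images from one prompt
```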