AnimeSuki Forums

Old 2024-01-31, 10:34   Link #141
Renegade334
Sleepy Lurker
*Graphic Designer
 
 
Join Date: Jul 2006
Location: Nun'yabiznehz
Age: 38
*casts necromancy spell of doom using the blood of Magic: the Gathering players a virgin as tribute to the eldritch deities of the netherworld

Boy, it's been a while, hasn't it? A lot of my stuff here has disappeared thanks to ImageShack and PhotoBucket changing their business models and screwing over free users. I won't reupload those; I'm just here to dump some Stable Diffusion art I've done lately. I don't want to clutter the AI thread with these, and they're original pieces rather than tied to a specific series (unlike what I recently posted in the Code Geass image thread).

Everything is sized at either 1024x1536px or 1536x1024px, because the maximum Hires upscaling factor I've successfully achieved so far is 2x, from a base picture of 512x768px or 768x512px. I might one day try 2.5x to reach 1280x1920px or 1920x1280px, but I'm not holding my breath. My aging potato PC can only do so much, and I'm not counting on anyone donating the latest RTX card (plus a better motherboard, RAM and CPU, and the SSDs and PSU to go with them T_T) to me - nor do I intend to invest in that myself; current times call for saving that money for other, more pressing purposes.
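For the curious, the resolution bookkeeping above is just multiplication. Here's a quick sketch (my own helper function, not anything from the SD toolchain) of how a Hires-fix factor maps a base canvas to the final output:

```python
# Hypothetical helper (my naming, not part of Stable Diffusion): map a
# base canvas and a Hires-fix upscale factor to the output resolution.
def hires_output(base_w, base_h, scale):
    # Automatic1111 rounds the target dimensions itself; plain truncation
    # is enough for the clean factors used here.
    return int(base_w * scale), int(base_h * scale)

# The 2x workflow described above:
print(hires_output(512, 768, 2))    # (1024, 1536)
print(hires_output(768, 512, 2))    # (1536, 1024)
# The hoped-for 2.5x workflow:
print(hires_output(512, 768, 2.5))  # (1280, 1920)
```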

Spoiler for software details:

1536x1024px:



1024x1536px:



The last pic doesn't live up to the others' quality and level of detail because the Lovecraftian monster in the background was, in its original state, too gremlin/goblin-like for my tastes rather than Cthulhu-ish. I eventually did some inpainting to generate a different monster while preserving the human character as she was, but, hey, Hires fix then refused to upscale it to 1024x1536px. Ultimate SD Upscale for Automatic1111, on the other hand, obliged, but without the added detail and img2img redraws of vanilla Hires fix. Oh well.

Spoiler for NSFW:
-- You can find the whole Imgur album with each image's prompt here: Part 1 / Part 2


Keep checking this post for later additions. I don't think it'll be a regular occurrence, as I'm treating my use of Stable Diffusion as an occasional hobby...and I don't want to turn it into a chore (I've had enough SD crashes already). I only started using SD in a rather desperate attempt to resolve a composition issue I couldn't convincingly do with Photoshop, and that foray also led me to a dead end. But the rabbit hole I flung myself into does have its moments of fun, which is why I'm prolonging this little adventure...and adopting a more sedate pace.

As always, feel free to nick whatever you want, just don't use it for commercial purposes.

Oh, and: I'm not taking any requests. Sorry, but Stable Diffusion generation is finicky enough as it is. This application doesn't pump out gems all the time and is therefore rather time-consuming, requiring a lot of adjustments, small editing (inpainting and after-detailing) and lots of batch generation before you come across what is essentially a happy accident. A beautiful coincidence. In many cases, as some will find out, Photoshop still needs to come to the rescue because SD will not cooperate any further and will give you even more egregious visual aberrations instead (ADetailer for hands, which I've started to use more often, has been kinda hit or miss, too).
__________________
<-- Click to enter my (moribund) GFX thread.

Last edited by Renegade334; 2024-03-05 at 03:35. Reason: 1 image + link to gallery part 2 added (FIXED) (2024/02/07)
Old 2024-02-07, 20:37   Link #142
Renegade334
I'm posting this after deciding to wind down my use of Stable Diffusion. I can't deny that this strange journey was quite fun in its own way; much of the gratification came from accidentally stumbling upon that happy visual coincidence that occurs after generating ten or twenty artistic aberrations, exercises in blandness, or outright horrors. It's a bit like playing at the casino, really. I nevertheless can't hide from the truth: it's a time-consuming pastime, and there always comes a time when one arrives at the conclusion, "eh, time for something else; hopefully something more constructive". It was about time, too: real life has been rather burdensome lately (my mother's health concerns, bad weather, life getting more expensive). In the past couple of days I even started deleting more than half of my diffusion models, figuring my PC would appreciate the extra free storage space.

Adelante, amigos! ("Onward, friends!")



I'm sure you'll recognize the characters and the series they belong to. Don't be surprised by the high number of female characters - the vast majority of LoRA models at Civitai, Pixai and Huggingface are of the female persuasion.

Funnily enough, I started my foray into visual AI because a friend asked me to Photoshop together a rather complex picture he'd use as the cover for a fanfiction of his - and I struggled with the task (two pics/renders refused to work together, so adamantly that I even considered using Blender to alter the source material and force the issue). So I turned to Stable Diffusion to see if it could cut this Gordian knot for me...and I faceplanted even harder; the results were worse still. But despite that resounding failure, I was impressed with some of my first experiments and ended up falling into a strange rabbit hole. It's now time for me to pull myself out of it and walk away. For a while, I think.

...As an impromptu parting gift...remember those NSFW pictures I posted in the Code Geass sub-forum? Well, I've been juggling different diffusion models to see how their outputs differed from one another despite using almost identical values (seed numbers, sampling and hires steps, etc.) as input parameters. I simply ran the same CG pics' prompts through a more visually intricate, aesthetically stringent and watercolor-oriented model that has been yielding me some real gems lately. Consider this - for the time being - my AI swansong.

(MOSTLY) NOT SAFE FOR WORK: (Now sorted by character to better keep track of what needs to be reupped)
Spoiler for NSFW:

I have...other...images like this that are EVEN more unsafe, but I'm pretty sure the forum rules won't let me post them here, so I shall stay on the right side of things and abstain.

I don't think I'll be updating this post with new stuff the way I did for the one above (if you haven't already, do check it out - you might've missed some additions!), but I'm in the middle of cleaning out my folders and pruning unneeded stuff, so who knows. And let us allow this thread to fall back into a well-deserved torpor once again.

Spoiler for ever-ballooning changelog + FINAL UPDATE ON THE MATTER:


EDIT - Imgur, the ever-so-frustrating host everyone loves to hate, has a permanent grudge against these two sets of pictures, so I'm uploading them to a different image host (Imgbox), same as for the Kusanagi Motoko picture above. I just don't want to keep reupping these all the time; they barely last two days.
Spoiler for NSFW - you know the drill:
EDIT two - I've reupped for nostalgia's sake most of my old signatures here - beware, though, some of them are actually above the forum's size limit:
Spoiler for antediluvian stuff - do people still use forum signatures, anyway?:

Last edited by Renegade334; 2024-06-06 at 14:17. Reason: Replacing dead links + reorganizing them to better identify dead counterparts + FINAL UPLOADS (2024.02.25)[collection:complete]
Old 2024-04-12, 22:06   Link #143
Renegade334
Some stuff I've been generating in my free time on weekends. Everything's in 1920x1280px or 1280x1920px.




^-- Special mention must go to the basketball-themed pair...which almost drove me insane with their VRAM crashes at the upscaling phase. No matter what settings I used in the startup parameters OR in the tiled VAE and hypertile tabs, Stable Diffusion kept faceplanting. I had to resort to img2img to upscale the base image, which, IMHO, isn't as good as a proper Hires.fix pass, even if others keep telling me Hires.fix is just img2img under the hood. The black-and-white version is actually the original, generated with the well-known Meinamix diffusion model, though After Detailer inpainted a different face style. The more colorful one was rebuilt through the Re:imagination model with the help of ControlNet.

And for some reason, SD has been having more trouble fashioning accurate hands than before, which is rather perplexing. Adding "disfigured" (with other keywords like "mutated", "distorted", "mutilated" or "malformed" thrown in for good measure) to the negative prompt helped...somewhat, but Jesus Christ...

And here below is what I called the "Beach Girl series/collection" - some personal experiments on prompt grammar and syntax variation (basically take one prompt and modify it little by little, see how wildly the output changes). The results were both interesting and perplexing (there were some unexpected deviations at times), and the process highlighted certain weaknesses and idiosyncrasies of the diffusion models I'm currently favoring. Generating multiple distinct characters without the use of LoRAs OR regional prompting was a tad tricky, but I found that combining "group of (x)" and "multiple (x)" with weights did the trick, especially when working with canvases set to landscape orientation (768x512px).
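On the weighting syntax used above: Automatic1111 reads `(text)` as a 1.1x emphasis boost, `(text:1.4)` as an explicit weight, and `[text]` as a roughly 1/1.1 de-emphasis. A deliberately simplified, flat-tokens-only sketch of that convention (the real parser handles nesting and escapes; the example fragments are illustrative):

```python
import re

# Simplified illustration of A1111-style prompt emphasis (flat tokens
# only; the real parser supports nesting, escapes and combined markup).
def token_weight(token):
    m = re.fullmatch(r"\((.+):([\d.]+)\)", token)
    if m:                                # explicit weight: "(text:1.4)"
        return m.group(1), float(m.group(2))
    if token.startswith("(") and token.endswith(")"):
        return token[1:-1], 1.1          # bare parens boost by 1.1x
    if token.startswith("[") and token.endswith("]"):
        return token[1:-1], round(1 / 1.1, 3)  # brackets de-emphasize
    return token, 1.0                    # no emphasis markup

print(token_weight("(group of girls:1.3)"))  # ('group of girls', 1.3)
print(token_weight("(multiple girls)"))      # ('multiple girls', 1.1)
print(token_weight("beach"))                 # ('beach', 1.0)
```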





Spoiler for you know the drill - same stuff, just enhanced with Photoshop filters:

Cheers.

Last edited by Renegade334; 2024-05-16 at 01:33.
Old 2024-05-14, 04:50   Link #144
Renegade334
Small dump of a Karen Kouzuki-centric prompt run through different models (Re:SONATE_Original, Re:IMAGINATION_Original, Re:LAXATION_SideA, Re:KINDLE_Empathy, Meinamix v11). All in 1280x1920px.

Pet peeves:
- I wanted a corn field and got sunflowers instead.
- I asked for a yellow tank top and got a white one instead.
...



Spoiler for same as above, just Photoshop-filtered:

Still on hiatus, but keeping an eye on SD-related updates. The software version got bumped to 1.9.3 not too long ago and boy, did the generation times for ControlNet-dependent prompts explode. Not sure what they changed, but it feels like the wait time doubled. Cheerio.

Last edited by Renegade334; 2024-05-16 at 02:08.
Old 2024-05-16, 02:07   Link #145
Renegade334
Just tinkering...

(Again, in 1280x1920px)


Spoiler for same, just with Photoshop filters:

Last edited by Renegade334; 2024-05-17 at 02:03.
Old 2024-06-17, 16:52   Link #146
Renegade334
Sup.
Haven't been up to much lately, I'm afraid, but here goes.




Spoiler for same-o, but spritzed up with Photoshop filters:

The last two (the dining couples) were a nightmare - an unexpected hiccup that serves as a prickly reminder that, sometimes, it's just the simplest and stupidest things that can make you spectacularly faceplant into a solid brick wall.
What do I mean? Through trial and error, I had perfected a personal Hires fix upscaling workflow in Stable Diffusion that let me bump a 768x512px base image to a 1920x1280px, wallpaper-level, high-res version. It wasn't easy to figure out, but until recently I thought I finally had the process down pat. Then I ran into repeated crashes while trying my luck with the second "dining couple" image (the one with the blonde) - and no matter what I did, Stable Diffusion just kept dying on me. I tweaked startup parameters I hadn't touched in months (which should've been a sign that that part was fine), modified sampler and hypertile settings, juggled upscaler tile sizes...no dice. I thought a recent Stable Diffusion update (possibly the recent spate of ControlNet updates) had finally broken my software and prevailed over my slowly but surely obsolescing hardware, permanently locking me out of this (now very occasional) hobby - more specifically, its high-res side.

Then, a week later, just as I was about to permanently throw in the towel, the answer to this vexing enigma came to me completely by accident: my negative prompt was too long. THAT was what was causing the process to devour an excessive amount of memory and yeet itself into a death spiral.
*sigh*
Even now I feel like kicking myself - it's such a dumb mistake that I never seriously considered it a likely failure point. I thought my woes were simply caused by excessive VRAM requirements due to inadequate tiling dimensions.

But no. I had to whittle this:
((badhandv4, easynegative, ng_deepnegative_v1_75t, verybadimagenegative_v1.3)), (worst quality, low quality:1.4), watermarks, signature, missing limbs, twisted limbs, disembodied limbs, disconnected limbs, extra limbs, disfigured, mutated, malformed, mutilated, mangled, deformed, two left hands, two right hands, three arms, three forearms, three hands
Down to this:
((badhandv4, easynegative, ng_deepnegative_v1_75t, verybadimagenegative_v1.3)), (worst quality, low quality:1.4), watermarks, signature
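For context on why prompt length matters at all (my reading, offered as an assumption rather than anything stated in the post): Automatic1111 encodes prompts in 75-token CLIP chunks, and every extra chunk enlarges the conditioning tensors that must fit in VRAM alongside the upscale itself. A rough, illustrative counter - keeping in mind that real CLIP uses BPE tokenization and that embeddings like `ng_deepnegative_v1_75t` expand to many token slots (75, going by its name), so a word-level count badly underestimates the true length:

```python
import math
import re

# Illustrative sketch (my own helper, not an A1111 API): approximate how
# many 75-token chunks a prompt occupies. Words/punctuation stand in for
# real BPE tokens, so this is only a lower bound on the true count.
def approx_chunks(prompt, chunk_size=75):
    tokens = re.findall(r"\w+|[^\w\s]", prompt)
    return max(1, math.ceil(len(tokens) / chunk_size))

print(approx_chunks("worst quality, low quality, watermarks"))  # 1 chunk
print(approx_chunks("tag, " * 100))  # 3 chunks -> larger conditioning tensors
```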

And, suddenly, I'm back in business - more or less. I actually had to use img2img to brute-force the upscaling...which bought me another round of trial and error I really didn't need. The "Ultimate SD Upscale" script finally did the job, but the results are...hrrrrmblmrlmngrrrrr... *more seething Renegade334 noises*

Want to know the worst part? The insult after the injury? The end (upscaled) result wasn't even that good - the blonde girl's head is too big compared to the rest of her body. So much trouble for a lackluster result. Still, I'll leave it here as a reminder (to myself and others) that sometimes all it takes is a small, silly grain of sand to bring everything crashing to a halt.

EDIT: added another pic now that Stable Diffusion is amenable to me again, but bl***y hell, all the hoops I had to jump through to generate/fix proper hands, both within SD and with the help of Photoshop...! Not to mention SD seems to have all the trouble in the world properly forming ballerina/ballet shoes - for some reason, they always came out as clogs with massively protruding, brick-like heels...accompanied by the occasional anatomical horror. *shiver*

Last edited by Renegade334; 2024-07-02 at 08:38. Reason: Added two more images :) (2024.06.19)
Old 2024-06-19, 15:12   Link #147
Renegade334
I knew this morning had been progressing too smoothly...
I added one image to my previous post (see above, of course) - something that was supposed to be a bit Bloodborne-ish in inspiration (the main character originally had a hood, but Stable Diffusion mutated it into long hair - an unexpected but not wholly unpleasant surprise) but turned into a tableau of its own. The Hires.fix upscaling went without a hitch, as I had hoped (and After Detailer's face-refining module did a much better job than expected), but then, while uploading the final result to Imgur, I noticed the image seemed...smaller than intended. I went back to Automatic1111's UI and, sure enough, Hires.fix's upscale ratio was at the stock 2x instead of 2.5x - which yielded a 1024x1536px image instead of my customary 1280x1920px.

And a little thing called "OCD", deep inside of me, simply. Wouldn't. Let. It. Go.
I had to fix it using img2img's Ultimate SD Upscale. And so, here's the 1280x1920px version:



I'll leave it here to showcase that, even though the 2.5x version is just an img2img upscale of the 2x original, Stable Diffusion still takes the initiative to add more detail the moment it realizes it has more real estate to play with. And since a randomness factor is baked into the process, you'll never get the exact same image twice, pixel for pixel.
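The reproducibility point can be illustrated with plain NumPy (an analogy of my own, not the actual SD code): diffusion starts from random latent noise, and unless the seed - along with every other knob - is pinned, each run starts from a different latent and therefore converges to a different image:

```python
import numpy as np

# Toy stand-in for SD's initial latent: 4 channels at 1/8 the pixel size
# (the SD 1.x convention; purely illustrative here).
def initial_latent(h8, w8, seed=None):
    rng = np.random.default_rng(seed)  # unseeded -> fresh noise each run
    return rng.standard_normal((4, h8, w8))

a = initial_latent(192, 128, seed=42)  # a 1536x1024 image -> 192x128 latent
b = initial_latent(192, 128, seed=42)
c = initial_latent(192, 128)           # no seed: different starting point
print(np.array_equal(a, b))  # True  - same seed reproduces the noise
print(np.array_equal(a, c))  # False - hence a different final image
```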

I'll edit this post to add any future diffusions (none for now, but who knows - I feel...adventurous this afternoon).

Last edited by Renegade334; 2024-06-19 at 15:25.
Old 2024-07-01, 08:52   Link #148
Renegade334


Spoiler for same as above, but Photoshop-filtered:

1. I don't know why it's snowing inside the chalet. I just...*shakes head*...let's just live with it, shall we?
2. The Vietnam scene was the trickiest of the lot - by a wide margin - because SD is absolute pants at understanding what a "Bell UH-1 Iroquois helicopter" is. Microsoft Copilot's take on DALL-E, on the other hand, is a lot more competent in that regard (probably because it's also hooked into an online search engine). So...I downloaded Microsoft's output and imported it into SD as a ControlNet reference image (with lineart_realistic as the ControlNet model)...but then SD completely mangled the rifle the infantryman was holding...so I had to hunt down a reference image/model of a Colt M16A1 on the Internet. Eventually, I found a royalty-free 3D scene (a .gltf file) that I opened in Windows 10's 3D Viewer, reoriented to fit the picture's context and exported as a still, which I then transplanted into SD's upscaled image using Photoshop. Yeesh. All the hoops you have to jump through to get your stuff just right...
3. The two Vietnam war variants were created after receiving some feedback from a friend. Between the original and the second take, there are at least three differences. Care to spot them?
4. Yes, I am revisiting old diffusions of mine (see previous posts in this thread) because it feels like the latest SD updates resulted in quality drops and an uptick in memory-related upscale crashes. So far, my findings are a mixed bag (I did find some new diffusions that had tessellated backgrounds, as if made of coarse bokeh...which I don't think I noticed before), but it could just be personal bias tinting my judgment.
5. All of it is in 1280x1920px (portrait) or 1920x1280px (landscape), as usual.

For those interested in AI creation and, more specifically, the workflow and refinement (Hires.fix + After Detailer) process:
Spoiler:
EDIT: added one more image above plus two Vietnam war pic variants.

Last edited by Renegade334; 2024-07-03 at 08:13. Reason: more images added (2024.07.02)
Powered by vBulletin® Version 3.8.11
Copyright ©2000 - 2024, vBulletin Solutions Inc.