
/b/ - Burrow

Bury yourself in the ground lizzie

File: 1692041974555.png (1.43 MB,1412x850,AI crap.png)

  No. 7155

Anything AI related

  No. 7157

File: 1692044222848.png (1.36 MB ,1000x1298 , 9RxDYwK.png )

I don't have enough data to make Poppy speak properly.

But her VA has played in many cartoons/anime dubs.
Listened to a compilation of Kate Higgins' voices; one in Monster High sounded almost the same (and some others did too).

It doesn't sound like her most of the time, but it did in that comp.
This shit will take so long to make. (6-hour series + 10 movies)

Definitely will try to sell it when I am done lol

  No. 7158

File: 1692058210582.jpg (24.1 KB ,459x480 , 1551862034715.jpg )

I'm losing my fucking mind trying to get stable diffusion to work.
For some reason venv picks up a version of Python that is not compatible with PyTorch and it's making me insane. I wish I knew how to configure it so that every time a virtual environment is created it uses the one I have installed (3.10), but I haven't found anything yet.
It's also fairly late
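
For what it's worth, a minimal sketch of pinning a venv to a specific interpreter, assuming python3.10 is already installed and on PATH:

python3.10 -m venv venv   # create the venv with the interpreter you want
source venv/bin/activate
python --version          # should now report Python 3.10.x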

  No. 7160

File: 1692078877331.jpg (111.33 KB ,591x662 , animegirls.jpg )

>>7158
Bahahaha, it sounds like your Python, PyTorch or Miniconda crap is not installed on the C / root drive.

I've been through this, and the losing-my-mind part too. That's why I laugh; I thought it was just me.

Reinstall everything related to Python and then install SD from scratch, or delete the venv folder and run the setup again.

You should have no issues after that, but if you do, I will make a step-by-step guide. Just post the errors.
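
Roughly what the "delete the venv folder and run the setup" route looks like, assuming the stock AUTOMATIC1111 folder name:

cd stable-diffusion-webui   # assumed install directory
rm -rf venv                 # throw away the broken virtual environment
./webui.sh                  # the launcher recreates venv and reinstalls torch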

  No. 7161

>>7160
I wouldn't be too sure. Running whereis python3 shows me that I have it installed in these two folders:
/usr/bin/python3
/usr/lib/python3
I'm not too sure what it is, but I think there must be something wrong with the ./webui-user.sh file

#!/bin/bash
#########################################################
# Uncomment and change the variables below to your need:#
#########################################################

# Install directory without trailing slash
#install_dir="/home/$(whoami)"

# Name of the subdirectory
#clone_dir="stable-diffusion-webui"

# Commandline arguments for webui.py, for example: export COMMANDLINE_ARGS="--medvram --opt-split-attention"
export COMMANDLINE_ARGS="--precision full --no-half"

# python3 executable
python_cmd="python3"

# git executable
#export GIT="git"

# python3 venv without trailing slash (defaults to ${install_dir}/${clone_dir}/venv)
#venv_dir="venv"

# script to launch to start the app
#export LAUNCH_SCRIPT="launch.py"

# install command for torch
export TORCH_COMMAND="pip3 install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/rocm5.4.2"

# Requirements file to use for stable-diffusion-webui
#export REQS_FILE="requirements_versions.txt"

# Fixed git repos
#export K_DIFFUSION_PACKAGE=""
#export GFPGAN_PACKAGE=""

# Fixed git commits
#export STABLE_DIFFUSION_COMMIT_HASH=""
#export CODEFORMER_COMMIT_HASH=""
#export BLIP_COMMIT_HASH=""

# Uncomment to enable accelerated launch
#export ACCELERATE="True"

# Uncomment to disable TCMalloc
#export NO_TCMALLOC="True"

###########################################

Personally I don't think I installed it wrong. But I can't be sure really; the main problem is what I wrote before:

INCOMPATIBLE PYTHON VERSION
This program is tested with 3.10.6 Python, but you have 3.6.15.
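
A hedged guess at the usual fix: the launcher honours python_cmd in webui-user.sh, so point it at an installed 3.10 (check with whereis python3.10) and rebuild the venv.

# in webui-user.sh
python_cmd="python3.10"     # or an absolute path, e.g. /usr/bin/python3.10
# then delete the old venv so it gets rebuilt with 3.10 on the next ./webui.sh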

  No. 7163

File: 1692140026516.jpg (119.44 KB ,712x500 , 6jfms0.jpg )

>>7161
>/usr/bin/python3
wait a minute, that looks nothing like Michaelsoft Binbows
*linux flashback intensifies*

Oh hell no, I remember fighting for hours making a bootable Windows ISO to run away from this system for gigabrains and its constant errors.

>#!/bin/bash

>########################################################
Aah, so it's like that, huh. I understand everything now.

>INCOMPATIBLE PYTHON VERSION

Man, that's some rookie stuff. Just change what it should use.
https://chat.openai.com/share/5b7212ad-26c1-4833-9527-b15e251c74d7
but for real, I have no idea what I'm looking at. Just guessing. You are on your own Lizzie

  No. 7164

File: 1692142091064-0.png (2.1 MB ,1331x1196 , Poppy in drop pod.png )

File: 1692142091064-1.png (1.11 MB ,1298x592 , WHAT ARE THOOOOSE.png )

I know nothing of linux, just joking.

This is my favorite, Poppy in Drop Pod from Deep Rock Galactic.

I forgot that IMG2IMG is shite and lacks something.
As an effect, colors get dull with each generation, and at higher resolutions you get black blobs.

The only retouching I did was in Paint (like 2 minutes for the chair and boot).
The rest is Stable Diffusion.
I will make a 3440x1550 version when I fix the issues, it will be gorgeous!

  No. 7166

>>7165
What a punchable face

  No. 7169

File: 1692292059042.png (1.58 MB ,1720x768 , 00097-862847854.png )

OH YEAH, I figured it all out
now check this out!

I have some things that I want to correct but for today it's enough.
Had many laughs while doing it.

  No. 7170

WHOOO BOYYYY
I GOT THIS TO WORK ON UBUNTU (RIP OPENSUSE)
HOW DO I TRAIN THIS TO MAKE GIGANTIC UDDERS THAT MAKE ME CUM BUCKETS NOW

  No. 7171

File: 1692482190932-0.png (828.62 KB ,640x960 , 00698-3711777755.png )

File: 1692482190932-1.png (850.62 KB ,640x960 , 00697-3711777754.png )

>>7170
>HOW DO I TRAIN
Oh you wanker, I will beat your ass for not reading replies on the slowest imgboard ever.
>>7146 >>7147 >>7148

Don't forget to put the .vae file into the models > VAE folder. Otherwise you will have shitty colors and artifacts like I did.

I don't think that you need training that much.
Unless you are some freako with weird waifu.

>CUM BUCKETS

Easily achievable, you are going to have loads of fun.
It took me 4 minutes to set the prompt and settings. No LoRA, just the model.
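
If it helps, a tiny sketch of the VAE bit (folder layout as in the stock webui, file name made up):

# from the stable-diffusion-webui root; the file name is just an example
mv ~/Downloads/whatever.vae.pt models/VAE/
# then pick it under Settings -> Stable Diffusion -> SD VAE (or leave it on Automatic)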

  No. 7172

File: 1692484248583.png (749.05 KB ,640x960 , 00006-3711777757.png )

Didn't write it clearly: I meant writing a prompt on the abyssorangemix3 model and hitting generate.
So as you can see, it already looks top tier. You can retouch weird AI fails with the inpaint function.
Honestly it shocked me how good it looks. Generated 6 and all were stunning. First time I tried to make anime lol

I was curious how my LoRA would act. Unfortunately the quality of yordle art is nowhere near that level, so my dataset can't make it that sharp and natural. The lack of blue-skinned ppl in AI models makes a huge difference too :d

I ran it with the same prompt.
Don't like it at all, it's better at cute art.
(Except headpats, shit's always dark and terrifying, like a before-crime scene)


Also LIIIIIZMIN, the spoiler image doesn't work.

  No. 7173

>>7171
OK OK
So I got it to work and managed to make some stuff. I actually found a character model for the two I wanna work with, but now I got some questions.
How do I set the character models? Can I increase the resolution of the art (I got a 6800 XT for reference)? It seems like it takes too much VRAM, but can't I allocate it somewhere else? What are the best settings to have?
Sorry if this is a bit much, I just feel a lot like a kid in a videogame store.

  No. 7174

Note: it runs out of VRAM when I increase the resolution of the image being generated

  No. 7175

File: 1692495003415.png (1.45 MB ,1024x1024 , first of many.png )

Apparently I was able to make it work with a square resolution.
But most of the images I'd rather make are a different shape than that.
Take this as a token of gratitude

  No. 7176

File: 1692495930289-0.png (386.45 KB ,768x336 , 00402-1195893998.png )

File: 1692495930289-1.png (391.14 KB ,768x336 , 00363-3145949797.png )

File: 1692495930289-2.png (333.78 KB ,768x336 , 00364-3145949798.png )

>>7173
You go into the models folder, create a new folder named Lora, and put the files there.

Now to make them work:
1. You need a word that calls your LoRA; the author should write somewhere what it is.
2. Additionally you need to activate it: below the Generate button there is an icon with a sun and a landscape (in the middle), click it and go to Lora.
If you don't see them, then just refresh. Click the chosen LoRA. It should now look like this in your prompt: <lora:whatevertheycalledit:1>
The :1 stands for its strength; the author should write which value works best.
That's all, example below.
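
So a prompt with the LoRA tag would end up looking something like this (trigger word and names made up):

masterpiece, best quality, 1girl, poppy_trigger_word, smiling, <lora:whatevertheycalledit:0.8>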

>resolutions

When making an image, you don't aim for high resolution at first. You go for 512 pixels at most; you can go up to 768 and 1024, but it's better to first find a prompt and seed that you like at lower resolutions because it's quicker.
Then, when you find a decent one, you bring it up to a higher resolution.
(That's why it's worth turning on the option to write a txt file for each generated image, it's in the webui settings.)
There are two ways:
Upscaling with hires fix (most often used) - don't know much yet, seek guides
Img2img

>not enough vram

Also your batch size may be too big.
You need to test how much your VRAM can handle. Batch size means how many images are made at the same time (it's faster than generating one at a time).
After getting the "not enough VRAM" error, THE MEMORY OFTEN STAYS ALLOCATED afterwards.
Then you need to free the memory from within the webui afaik, or restart the console.
I just close it and run it again every time.
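
A possible way to keep an eye on that while testing batch sizes, assuming a ROCm setup like yours (on Nvidia it would be nvidia-smi instead):

watch -n 1 rocm-smi   # refresh GPU / VRAM usage every second while generating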

>allocate vram somewhere else

I wanted to do that too.
You can't; VRAM is fast af and nothing comes close. In theory you can, but it's so impractical that nobody does it.

>Sorry if this is a bit much,

Nah, all good. I was literally eagerly waiting for you to reply hahaha

I think that covers the basics. The rest you can find on reddit most often, although the info gets outdated pretty fast. Training used to take so much time and resources; currently it's in a pretty good spot.
Also, if you bought your GPU recently I would return it and go for some used Nvidia, as they are better optimized for AI.

Remember it's best to use tags from danbooru.
If my great prompt gives out shit I just start over. Tries and many tries.

>best settings

Well, it's a matter of preference. I just run 512 by 512 or lower; you adapt the steps to the resolution.
I use DPM++ 2M Karras at 30 steps.
Examples in the pics: 768x336. I will upscale them in the future
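
For reference, the txt the webui saves next to each image looks roughly like this (every value here is made up):

1girl, poppy_trigger_word, smiling, <lora:whatevertheycalledit:0.8>
Negative prompt: lowres, bad anatomy
Steps: 30, Sampler: DPM++ 2M Karras, CFG scale: 7, Seed: 1234567890, Size: 768x336, Model: abyssorangemix3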

  No. 7178

File: 1692498792305.png (5.09 MB ,3680x1600 , 98C 208F on memory holy sh….png )

>>7175
AWWW I'm proud of you, thanks!
Well, just change the height or width and no more squares :DDDD


Also you motivated me to try upscaling.
So I just read up, and the best method is what I described previously: IMG2IMG with "Resize by".
You keep trying as high as possible till you run out of VRAM.
Then scroll down and at the bottom click Script -> SD upscale -> choose your scale factor -> R-ESRGAN 4x+ Anime6B

My room got so fucking hot; the card's memory reached 98°C / 208°F. Got kinda scared, opened the window wide and turned the fans up to vacuum cleaner noise levels.
Turned out these temps are fine, but I will get an additional fan.

THESE DETAILS THO, except weird hand and arm going into dress. But that can be easily fixed with inpainting.
info: used a 3090 24 GB, scale factor 2, 7 min 4 sec
My PSU has a gimmick of showing how much it draws: 460 W

  No. 7179

File: 1692499649363.png (1010.44 KB ,1024x1024 , 00000-2961104887.png )

>>7176
>also if you bought your gpu recently I would return it and go for some used nvidia as they are better optimized for AI.
Ehh, I trust that the technology will eventually reach diminishing returns and AMD and Nvidia will end up on the same ground. Bought it more than 3 months ago and it was a refurb anyway.

The main issue that I seem to be getting is this one:

torch.cuda.OutOfMemoryError: HIP out of memory. Tried to allocate 4.50 GiB (GPU 0; 15.98 GiB total capacity; 6.82 GiB already allocated; 4.53 GiB free; 11.36 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_HIP_ALLOC_CONF

>Loras

I did eventually follow a youtube guide that helped me set up Loras so that was good.

Did try what you said about upscaling, but now my girl has no teeth :cold_face_emoji:

Also, how would I make the legs from here?

  No. 7180

Also, to make this image I used a LoRA tag for an artist whose LoRA I don't actually have, so it didn't use the artstyle this artist would use. Can I use multiple LoRAs without conflict?

  No. 7181

>>7179
>AMD and Nvidia will be at the same ground
I hope for that too

Had that issue.
If I remember correctly u need to edit webui.py
Third answer:
https://github.com/AUTOMATIC1111/stable-diffusion-webui/issues/6460
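
A rough sketch of what usually gets suggested for that error (the message itself points at max_split_size_mb / PYTORCH_HIP_ALLOC_CONF; the exact values are guesses):

# in webui-user.sh
export PYTORCH_HIP_ALLOC_CONF=max_split_size_mb:512              # fight fragmentation on ROCm
export COMMANDLINE_ARGS="--precision full --no-half --medvram"   # --medvram trades speed for VRAM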

>granny smile

Oh haha, when stuff like that happens you can go to img2img inpainting and regenerate that part.
If it's fighting you, then cut the smile from a previous good image, img2img resize it to the right scale with low denoising, and paste it in. There might be seams; just deal with them by inpainting again.

>making legs

Outpainting for that
There you go: https://stable-diffusion-art.com/outpainting/

>multiple loras

They work on the same canvas so they will mix together, which means your artstyle will change.
You can play around with LoRA strength to see if your character and style stay consistent.
If not, generate first with the character LoRA, then send it to img2img and run it with the style one.
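
I.e. something like this with hypothetical names, lowering one strength until they stop fighting:

1girl, character_trigger, <lora:characterA:0.8>, artist_style, <lora:styleB:0.5>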

Oh you butterfingers, I can see it
>>7171
>Don't forget to put .vae into models>vae folder
Go get it, colors won't be dull

  No. 7182

oh dhis is givving me he cemichals in the brain i can't tipe
*UUUUUUUUUUUUUUUUUUUUUUUUUUUUUUUGHHHHHHHHHHHHHHHHHHHHHHHH*

  No. 7183

File: 1692552810266.png (25.23 KB ,192x166 , 5f2439a6f9e2cb758be415d8d8….png )

Man, how do I use an artstyle LoRA? I found an artist that I've followed a lot recently, and it's making me shiver thinking how good the results could possibly be.

  No. 7184

File: 1692569141875.jpg (67.86 KB ,621x642 , 3a6edf06e36331d26099fb79cf….jpg )

>>7183
The same way you use a normal LoRA.
I wrote it previously too.

Ouch my feelings

  No. 7185

>>7184
Sorry, I have attention deficit disorder.

  No. 7186

>>7185
Oh then no need to be sorry, everything is fine :D
I just assumed something else.

You go into the models folder, create a new folder named Lora, and put the files there.

Now to make them work:
1. You need a word that calls your LoRA; the author should write somewhere what it is.
2. Additionally you need to activate it: below the Generate button there is an icon with a sun and a landscape (in the middle), click it and go to Lora.
If you don't see them, then just refresh. Click the chosen LoRA. It should now look like this in your prompt: <lora:whatevertheycalledit:1>
The :1 stands for its strength; the author should write which value works best.
That's all

  No. 7187

File: 1692617684522.png (257.48 KB ,512x512 , 00000-2836307600.png )

>>7181
When using inpainting, should I try to generate the whole thing again, or just one attribute? Not this one, but in another image for example the face came out realistic and I wanted to get rid of that, besides other minor quirks. How would I go about doing that? Do I just highlight what I want to paint over and then write "smile, face"?
In the end I fixed the issue by unticking "Restore Faces"

  No. 7188

>>7187
Just the attribute.
Exactly as you wrote.

Your image is 512x512 and the face takes up a small part of it.
So inpainting may struggle to get it right with such a low pixel count.

Restore Faces just "saves" the previous face/expression when running through img2img.
It's not for fixing the face; I thought that too at first.
I only use it when the resolution is big and the face already seems perfect.

Did you try other image ratios?
Like horizontal and vertical.
You just change the size in the settings; it often makes the generation more lively and interesting compared to 1:1

  No. 7196

File: 1692686277062.jpg (39.42 KB ,567x557 , 1665331353960388.jpg )

>>7188
I did eventually configure my GPU so that it can run these kinds of images now.
I am also figuring out that the Noise Volume setting changes how images look when using img2img, which makes making hybrids of LoRAs easier to understand, but still not that easy to apply.
But I'm having tons of fun with this. I just installed an app on my phone that lets me run SD on my PC remotely from my phone, although the options you get in it are quite limited. Can't upload the results I got for some reason
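
Not the same as your phone app, but for anyone else reading: the webui itself can be opened in a phone browser on the same network if you launch it with the listen flag (the IP is whatever your PC has on the LAN):

# add --listen to the existing COMMANDLINE_ARGS in webui-user.sh, then open http://<pc-ip>:7860 on the phone
export COMMANDLINE_ARGS="--precision full --no-half --listen"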

  No. 7204

File: 1692814770270.webm (1.67 MB ,720x720 , b95737345c749ea75d080dbbb….webm )

imade my firstntuabledrawingUugwaaaaaaaa

  No. 7231

File: 1693971700483-0.png (1.54 MB ,1056x1113 , asdgfdhgdhf.png )

File: 1693971700483-1.png (1.59 MB ,1440x1048 , 00080-3770623158.png )

I feel no more need to fiddle with settings for my LoRA, it's satisfactory for me.
Now I feel quite proud but also empty.
Idk what to do now

  No. 7232

>>7231
How about making a doujin?

  No. 7234

>>7232
Great idea! Thank you

Going to be a huge pain in the ass to make AI do exactly what I want.
I don't think that creating a new one from scratch, without any drawings or 3D modeling, will work.
Will test how it goes basing it on some existing one.

  No. 7235

File: 1694082015708.jpg (265.37 KB ,1116x967 , lol.jpg )

>>7232
hahaha that loss of detail in the last panel
Most likely due to low pixel count; it was 1424x528, but that's not high enough for the face.

It took quite a bit to make; most of the time went to recoloring the skin, as it doesn't work unless it has a correct base color lmao

Using a 3D model and editing software would make it look pretty nice, and it would be faster for sure

  No. 7236

>>7235
That looks phenomenal

  No. 7276

File: 1696528332195.png (567.39 KB ,720x512 , 01075-2597429106.png )

SO CUTE

  No. 7280

File: 1696546308352-0.png (899.66 KB ,760x536 , 106603629.70500003_image.png )

File: 1696546308352-1.png (968.09 KB ,760x536 , 106671115.89000002_image.png )

File: 1696546308352-2.png (490.66 KB ,720x512 , 01191-469877468.png )

File: 1696546308352-3.png (791.83 KB ,513x663 , 85764927.08_image.png )

ahahah so thick, I don't know what's up with it.
It either makes them flat or gives huge milkers.
Even if I blacklist words

  No. 7281

>>7280
You tried ComfyUI?

  No. 7282

>>7280
Perfectly thick. It is as things should be.

  No. 7283

Hmm, I falsely thought that Stable Diffusion understands concepts and activities.
Thought it had some understanding on its own. But it's just dumbly mashing shit up that looks good.

>>7281
Not yet, but that's on my radar.
Next up are OpenPose and Dynamic Prompts.
After learning these extensions I'm going to jump into Blender, make poses and set the lighting.

Can't wait till POV option gets better and multiple character generation is fully supported.
LEMME HUG MY SHORTSTACK

I tried making some more stuff today but everything looked bad.
BECAUSE I forgot to specify the breast size… and that was lowering overall quality lol

My Poppy voice project is on hold because I don't feel like watching movies just to cut audio.
The last one gave me so little; the damn VA was switching between voices hard af even though it was only one character.

  No. 7319

File: 1698978098899.png (497.1 KB ,512x720 , 00029-1220187055.png )

>xd

  No. 7323

>>7319
I do

  No. 7384

File: 1706453307281.png (913.33 KB ,699x694 , pops runner.PNG )

blade runner Poppy lol

  No. 7471

File: 1712241799737-0.png (859.39 KB ,832x1216 , 00044-3814088809.png )

File: 1712241799737-1.png (1004.45 KB ,832x1216 , 00076-814888908.png )

I dunno if Poppyfag is still lurking, but I'd like to know if you can give some tips on making a good gen.
Personally I feel like mine are kinda boring, and the hands could also use some improvement.
Are you still on 1.5 or did you make the jump to XL?

  No. 7472

File: 1712242390392.png (3.49 MB ,1536x2560 , 1712107098079416.png )

Take this gen for comparison: it feels much more attractive, as in the colours, the lines are more defined, and it seems like there is more detail too.
Also nothing seems screwy.

  No. 7484

File: 1712602502486.png (9.69 KB ,270x218 , rip.PNG )

>>7471
>I dunno if Poppyfag is still lurking
I'm here every day

>some tips

Try different models/LoRAs; you can just copy prompts from people that use the same model.
The best site I know is https://civitai.com/images?tags=4

>hands

read some inpaint guides

>1.5 or XL

I didn't try XL. I had huge plans to learn Blender, pose characters and set specific places for the background.
But I spiralled into doing nothing creative and slacking off; then my drive ran out of space, so I had to get rid of Python and other SD-related stuff.

>>7472
Wow that really does look good, not even giving off ai vibes.

  No. 7485

>>7484
>didn't try XL. I had huge plans to learn Blender, pose characters and set specific places for the background.
I see. I made those past gens with PonyXL because it's what anons recommended. I think it looks neat for gens that didn't take much hassle, but for some reason I don't think I want big tiddy now :C
>But I spiralled into doing nothing creative and slacking off, then my drive ran out of space so I had to get rid of python and other SD related stuff.
Oh that sucks. Hope you can get a new drive soonish. And that you find reasons to go back to genning.

  No. 7527

File: 1715321262554.png (6.83 KB ,522x136 , AAAAAA.PNG )

>>7485
>I made those past gens with PonyXL
Good stuff Liz
>get a new drive soonish
Look Liz, I have a new B: drive.
Not really; I'm trying to get KoboldAI working, screwed something up, and it gave me a new magical drive! :-DD

Also tried kobold.cpp, something that is supposed to work for even the most stupid ppl, but it doesn't launch for me ahhh

  No. 7530

File: 1715359118659.png (29.37 KB ,666x455 , wtf.PNG )

Reinstalled on another drive so I can have space to do ai stuff again and holy shit what have I done hahah

SOMEHOW I screwed it up lol
Installed on Disk 0 (C:) BUT it doesn't boot up if Disk 2 is gone!

And the "SSD" partition is lost, probably because it lacks a drive letter, but I'm not going to risk bricking my PC since I don't have any flash drives.

I will be a hackerman one day, you will fear me one day


