Based on the results verified so far, this page walks through writing generative-AI programs in Python.
| Image-to-image generation: img2img |
| Model type | Base image size | Pipeline class |
| SD1.5 | 512x512 | StableDiffusionImg2ImgPipeline |
| SDXL | 1024x1024 | StableDiffusionXLImg2ImgPipeline |
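As a minimal sketch of how the table above translates into code (the helper name and the idea of switching on a model-type string are illustrative, not part of the scripts below), the pipeline class can be selected to match the checkpoint:

import torch
from diffusers import StableDiffusionImg2ImgPipeline, StableDiffusionXLImg2ImgPipeline

def load_img2img_pipeline(model_path, model_type = 'SD1.5', device = 'cuda'):
    # Pick the pipeline class that matches the checkpoint type (see the table above)
    if model_type == 'SD1.5':
        cls = StableDiffusionImg2ImgPipeline      # base size 512x512
    else:
        cls = StableDiffusionXLImg2ImgPipeline    # SDXL, base size 1024x1024
    return cls.from_single_file(model_path, torch_dtype = torch.float16).to(device)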
## sd_030.py Image-to-image generation (img2img)
## model: beautifulRealistic_brav5.safetensors
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler, logging
from translate import Translator
logging.set_verbosity_error()
# Path to the model file
model_path = "/StabilityMatrix/Data/Models/StableDiffusion/SD1.5/beautifulRealistic_brav5.safetensors"
image_path = "images/StableDiffusion_247.png"
# "cuda" to use the GPU, "cpu" otherwise
device = 'cuda'
# Seed value
seed = 12345678
# Create the pipeline
pipeline = StableDiffusionImg2ImgPipeline.from_single_file(
    model_path,
    torch_dtype = torch.float16,
).to(device)
# Configure the scheduler
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
# Prompt (auto-translated from Japanese to English)
trans = Translator('en', 'ja').translate
prompt_jp = '黒髪で短い髪の女性'
prompt = trans(prompt_jp)
src_image = Image.open(image_path)
# Create a Generator object
generator = torch.Generator(device).manual_seed(seed)
print(f'Seed: {seed}, Model: {model_path}')
print(f'prompt : {prompt_jp} → {prompt}')
# Generate the image
image = pipeline(
    prompt = prompt,
    image = src_image,
    num_inference_steps = 30,
    guidance_scale = 7,
    strength = 0.6,
    generator = generator
).images[0]
image.save("results/image_030.png")
(sd_test) PS > python sd_030.py
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 10.30it/s]
Seed: 12345678, Model: /StabilityMatrix/Data/Models/StableDiffusion/SD1.5/beautifulRealistic_brav5.safetensors
prompt : 黒髪で短い髪の女性 → a woman with short black hair
100%|██████████████████████████████████████████| 18/18 [00:01<00:00, 15.78it/s]
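sd_030.py passes the source image to the pipeline as-is. Since SD1.5 models work best around 512x512 (see the table at the top of the page), it can help to resize the input beforehand; a small optional addition, not part of sd_030.py:

from PIL import Image

src_image = Image.open("images/StableDiffusion_247.png")
# Optional: bring the source image to the SD1.5 base resolution before img2img
# (here simply 512x512, ignoring the aspect ratio)
src_image = src_image.resize((512, 512))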
## sd_031.py Image-to-image generation: the strength parameter
## model: beautifulRealistic_brav5.safetensors
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler, logging
from translate import Translator
import matplotlib.pyplot as plt
logging.set_verbosity_error()
# Generate one image with the given strength
def image_generation(strength):
    # Create the pipeline
    pipeline = StableDiffusionImg2ImgPipeline.from_single_file(
        model_path,
        torch_dtype = torch.float16,
    ).to(device)
    # Configure the scheduler
    pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
    # Create a Generator object
    generator = torch.Generator(device).manual_seed(seed)
    # Generate the image
    img = pipeline(
        prompt = prompt,
        image = src_image,
        num_inference_steps = 30,
        guidance_scale = 7,
        strength = strength,
        generator = generator
    ).images[0]
    return img
# Path to the model file
model_path = "/StabilityMatrix/Data/Models/StableDiffusion/SD1.5/beautifulRealistic_brav5.safetensors"
image_path = "images/StableDiffusion_247.png"
# "cuda" to use the GPU, "cpu" otherwise
device = 'cuda'
# Seed value
seed = 12345678
# Prompt (auto-translated from Japanese to English)
trans = Translator('en', 'ja').translate
prompt_jp = '黒髪で短い髪の女性'
#prompt_jp = 'テラスでコーヒーを飲む金髪の女性'
prompt = trans(prompt_jp)
src_image = Image.open(image_path)
print(f'Seed: {seed}, Model: {model_path}')
print(f'prompt : {prompt_jp} → {prompt}')
# Generate multiple images while varying strength
plt.figure(figsize = [6, 15.5], dpi = 100)
for i in range(10):
    strength = 0.1 + i * 0.1
    img = image_generation(strength)
    plt.subplot(5, 2, i + 1, title = "strength = %.1f" % strength)
    plt.imshow(img)
    plt.axis('off')
    # Release GPU memory
    if device == 'cuda':
        torch.cuda.empty_cache()
    elif device == 'mps':
        torch.mps.empty_cache()
plt.tight_layout()
plt.savefig('results/image_031.png')
plt.close()
(sd_test) PS > python sd_031.py
Seed: 12345678, Model: /StabilityMatrix/Data/Models/StableDiffusion/SD1.5/beautifulRealistic_brav5.safetensors
prompt : 黒髪で短い髪の女性 → a woman with short black hair
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 15.31it/s]
100%|████████████████████████████████████████████| 3/3 [00:00<00:00, 16.70it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 26.95it/s]
100%|████████████████████████████████████████████| 6/6 [00:00<00:00, 26.25it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 34.83it/s]
100%|████████████████████████████████████████████| 9/9 [00:00<00:00, 25.62it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 34.48it/s]
100%|██████████████████████████████████████████| 12/12 [00:00<00:00, 25.21it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 22.46it/s]
100%|██████████████████████████████████████████| 15/15 [00:00<00:00, 24.61it/s]
Fetching 11 files: 100%|████████████████████| 11/11 [00:00<00:00, 11032.36it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 34.50it/s]
100%|██████████████████████████████████████████| 18/18 [00:00<00:00, 24.45it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 29.71it/s]
100%|██████████████████████████████████████████| 21/21 [00:00<00:00, 24.55it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 34.32it/s]
100%|██████████████████████████████████████████| 24/24 [00:00<00:00, 24.17it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 34.67it/s]
100%|██████████████████████████████████████████| 27/27 [00:01<00:00, 24.19it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 23.52it/s]
100%|██████████████████████████████████████████| 30/30 [00:01<00:00, 24.25it/s]
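The growing progress bars in the log above (3/3, 6/6, ..., 30/30) reflect how strength works in diffusers img2img: the source image is pushed only partway back into the noise schedule, so roughly int(num_inference_steps * strength) denoising steps are actually executed. A quick check of that relationship (the formula is the commonly documented img2img behaviour, not code taken from sd_031.py):

num_inference_steps = 30
for i in range(10):
    strength = 0.1 + i * 0.1
    # Approximate number of denoising steps the img2img pipeline will actually run
    print(f'strength = {strength:.1f} -> about {int(num_inference_steps * strength)} steps')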
| Prompt | Japanese input | Auto-translated English |
| ① | 黒髪で短い髪の女性 | a woman with short black hair |
| ② | テラスでコーヒーを飲む金髪の女性 | Blonde drinking coffee on the terrace |
## sd_032.py Image-to-image generation: prompt importance (guidance_scale)
## model: beautifulRealistic_brav5.safetensors
import torch
from PIL import Image
from diffusers import StableDiffusionImg2ImgPipeline, DPMSolverMultistepScheduler, logging
from translate import Translator
import matplotlib.pyplot as plt
logging.set_verbosity_error()
# Generate one image with the given guidance scale
def image_generation(g_scale):
    # Create the pipeline
    pipeline = StableDiffusionImg2ImgPipeline.from_single_file(
        model_path,
        torch_dtype = torch.float16,
    ).to(device)
    # Configure the scheduler
    pipeline.scheduler = DPMSolverMultistepScheduler.from_config(pipeline.scheduler.config)
    # Create a Generator object
    generator = torch.Generator(device).manual_seed(seed)
    # Generate the image
    img = pipeline(
        prompt = prompt,
        image = src_image,
        num_inference_steps = 30,
        guidance_scale = g_scale,
        strength = 0.5,
        generator = generator
    ).images[0]
    return img
# Path to the model file
model_path = "/StabilityMatrix/Data/Models/StableDiffusion/SD1.5/beautifulRealistic_brav5.safetensors"
image_path = "images/kaisendon.jpg"
# "cuda" to use the GPU, "cpu" otherwise
device = 'cuda'
# Seed value
seed = 12345678
# Prompt (auto-translated from Japanese to English)
trans = Translator('en', 'ja').translate
prompt_jp = 'ラーメン'
#prompt_jp = '鰻丼'
prompt = trans(prompt_jp)
src_image = Image.open(image_path)
print(f'Seed: {seed}, Model: {model_path}')
print(f'prompt : {prompt_jp} → {prompt}')
# Generate multiple images while varying guidance_scale
plt.figure(figsize = [6, 9.5], dpi = 100)
for i in range(6):
    img = image_generation(i * 2)
    plt.subplot(3, 2, i + 1, title = 'guidance_scale = %d' % (i * 2))
    plt.imshow(img)
    plt.axis('off')
    # Release GPU memory
    if device == 'cuda':
        torch.cuda.empty_cache()
    elif device == 'mps':
        torch.mps.empty_cache()
plt.tight_layout()
plt.savefig('results/image_032.png')
plt.close()
(sd_test) PS > python sd_032.py
Seed: 12345678, Model: /StabilityMatrix/Data/Models/StableDiffusion/SD1.5/beautifulRealistic_brav5.safetensors
prompt : ラーメン → Ramen
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 14.86it/s]
100%|██████████████████████████████████████████| 15/15 [00:02<00:00, 7.26it/s]
Fetching 11 files: 100%|█████████████████████| 11/11 [00:00<00:00, 8801.48it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 33.02it/s]
100%|██████████████████████████████████████████| 15/15 [00:03<00:00, 3.87it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 33.53it/s]
100%|██████████████████████████████████████████| 15/15 [00:03<00:00, 3.87it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 33.18it/s]
100%|██████████████████████████████████████████| 15/15 [00:03<00:00, 3.86it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 22.71it/s]
100%|██████████████████████████████████████████| 15/15 [00:03<00:00, 3.86it/s]
Fetching 11 files: 100%|███████████████████████████████| 11/11 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 6/6 [00:00<00:00, 33.00it/s]
100%|██████████████████████████████████████████| 15/15 [00:03<00:00, 3.86it/s]
| Prompt | Japanese input | Auto-translated English |
| ① | ラーメン | Ramen |
| ② | 鰻丼 | Eel Rice Bowl |
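For reference, guidance_scale is the weight of classifier-free guidance: at every step the model predicts noise both with and without the prompt, and the result is pushed toward the prompted prediction by that factor, so larger values follow the prompt more literally. A toy sketch of the standard combination formula (illustrative only; sd_032.py never computes this itself, the pipeline does it internally):

import torch

def cfg_combine(noise_uncond, noise_text, guidance_scale):
    # Standard classifier-free guidance: extrapolate from the unconditional
    # prediction toward the text-conditioned prediction
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

# Dummy tensors just to show the arithmetic
uncond = torch.zeros(1, 4, 64, 64)
text = torch.ones(1, 4, 64, 64)
print(cfg_combine(uncond, text, 7.0).mean().item())   # -> 7.0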
## sd_033.py [SDXL] Combining two models with a refiner
## model: animexlXuebimix_v60LCM.safetensors
## fudukiMix_v20.safetensors
import torch
from PIL import Image
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline, EulerAncestralDiscreteScheduler, logging
from translate import Translator
import matplotlib.pyplot as plt
logging.set_verbosity_error()
# Paths to the model files
model_base_path = "/StabilityMatrix/Data/Models/StableDiffusion/animexlXuebimix_v60LCM.safetensors"
model_ref_path = "/StabilityMatrix/Data/Models/StableDiffusion/fudukiMix_v20.safetensors"
# "cuda" to use the GPU, "cpu" otherwise
device = 'cuda'
# Seed value
seed = 12345678
# Pipeline for the base model
pipe_base = StableDiffusionXLPipeline.from_single_file(
    model_base_path,
    torch_dtype = torch.float16
).to(device)
# Configure the scheduler
pipe_base.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe_base.scheduler.config)
# Create a Generator object
generator = torch.Generator(device).manual_seed(seed)
# Pipeline for the refiner model
pipe_ref = StableDiffusionXLImg2ImgPipeline.from_single_file(
    model_ref_path,
    torch_dtype = torch.float16,
    scheduler = pipe_base.scheduler   # share the same scheduler
).to(device)
# Prompt (auto-translated from Japanese to English)
trans = Translator('en', 'ja').translate
prompt_jp = '猫を抱いている短い髪のの女性'
prompt = trans(prompt_jp)
print(f'Seed: {seed}')
print(f'Model1: {model_base_path}')
print(f'Model2: {model_ref_path}')
print(f'prompt : {prompt_jp} → {prompt}')
# Generate with the base model
img0 = pipe_base(
    prompt,
    num_inference_steps = 20,
    generator = generator,
    denoising_end = 0.4,       # stop denoising partway through
    output_type = 'latent'     # return latents instead of a decoded image
).images
# Continue generation with the refiner model
img = pipe_ref(
    prompt,
    image = img0,
    num_inference_steps = 20,
    generator = generator,
    denoising_start = 0.4,     # resume denoising from where the base model stopped
).images[0]
img.save('results/image_033.png')
(sd_test) PS > python sd_033.py
Fetching 17 files: 100%|███████████████████████████████| 17/17 [00:00<?, ?it/s]
Loading pipeline components...: 100%|████████████| 7/7 [00:01<00:00, 4.16it/s]
Fetching 17 files: 100%|████████████████████| 17/17 [00:00<00:00, 17009.34it/s]
Loading pipeline components...: 100%|████████████| 7/7 [00:01<00:00, 6.38it/s]
Seed: 12345678
Model1: /StabilityMatrix/Data/Models/StableDiffusion/animexlXuebimix_v60LCM.safetensors
Model2: /StabilityMatrix/Data/Models/StableDiffusion/fudukiMix_v20.safetensors
prompt : 猫を抱いている短い髪のの女性 → a short-haired woman holding a cat
100%|████████████████████████████████████████████| 8/8 [02:04<00:00, 15.57s/it]
100%|██████████████████████████████████████████| 12/12 [03:47<00:00, 18.99s/it]
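The two progress bars at the end (8/8 and 12/12) match the denoising_end / denoising_start split: with num_inference_steps = 20 and a split point of 0.4, the base model handles about the first 40% of the schedule and the refiner the remaining 60%. A quick check of that arithmetic (the rounding here is an approximation of how diffusers slices the step list):

num_inference_steps = 20
split = 0.4   # denoising_end for the base model, denoising_start for the refiner
base_steps = round(num_inference_steps * split)
refiner_steps = num_inference_steps - base_steps
print(base_steps, refiner_steps)   # -> 8 12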