<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ComfyUI &#8211; 0260</title>
	<atom:link href="https://0to60.top/tag/comfyui/feed/" rel="self" type="application/rss+xml" />
	<link>https://0to60.top</link>
	<description>Zero To More</description>
	<lastBuildDate>Wed, 17 Sep 2025 07:56:32 +0000</lastBuildDate>
	<language>zh-Hans</language>
	<sy:updatePeriod>
	hourly	</sy:updatePeriod>
	<sy:updateFrequency>
	1	</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9</generator>

<image>
	<url>https://0to60.top/wp-content/uploads/2025/05/cropped-ba4ded44ed2ce0995e556ab476fd11d2-1-150x150.jpg</url>
	<title>ComfyUI &#8211; 0260</title>
	<link>https://0to60.top</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>ComfyUI: Consistent Characters in Practice</title>
		<link>https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/</link>
					<comments>https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/#respond</comments>
		
		<dc:creator><![CDATA[burson]]></dc:creator>
		<pubDate>Mon, 15 Sep 2025 09:25:00 +0000</pubDate>
				<category><![CDATA[ComfyUI-SD]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<category><![CDATA[Comfyui实践篇]]></category>
		<guid isPermaLink="false">https://0to60.top/?p=1015</guid>

					<description><![CDATA[<p>Introduction: To avoid falling into learning traps, staged goals&#46;&#46;&#46;</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/">ComfyUI: Consistent Characters in Practice</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>

<p>To avoid falling into learning traps, I use staged goals to guide and constrain the learning path.</p>

<p>The goal for this installment is character consistency, culminating in training a Flux LoRA model. The overall plan:</p>

<ul class="wp-block-list">
<li>Character initialization: obtain a satisfactory front-facing T-pose full-body shot and a high-resolution head image.</li>

<li>Multi-view generation: from the initialized character, produce images from different viewpoints.</li>

<li>Expressions: add facial expressions to all images.</li>

<li>Backgrounds: add backgrounds to the expression-enriched images. (At this point we have the training material.)</li>

<li>Material preparation: batch-resize and caption the images.</li>

<li>Model training.</li>
</ul>

<h2 class="wp-block-heading">1. Character Initialization</h2>

<p>This step is straightforward and has two goals:</p>

<ul class="wp-block-list">
<li>Obtain a front-facing T-pose full-body shot.</li>

<li>Obtain a high-resolution close-up of the head, to serve as the reference for facial features later.</li>
</ul>

<h3 class="wp-block-heading">1.1 Generating the T-pose Character</h3>

<p>A FLUX model is used here.</p>

<p>Generally, a character is either created at random or meant to carry the facial features of a target person. In Flux, a PuLID module can be added to extract the target person's facial features.</p>

<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image.png" alt="" class="wp-image-1018" srcset="https://0to60.top/wp-content/uploads/2025/09/image.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-700x394.png 700w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>



<h3 class="wp-block-heading">1.2 Upscaling the Head</h3>

<p>The basic approach:</p>

<ul class="wp-block-list">
<li>Extract the head region from the T-pose character.</li>

<li>SD upscaling + FaceDetailer. To keep the upscale from drifting too far from the original, prompts reverse-engineered from the image plus ControlNet Tile preserve the original's features.</li>

<li>To preserve the original character image's color style, add a color-match node.</li>
</ul>

<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-1.png" alt="" class="wp-image-1020" srcset="https://0to60.top/wp-content/uploads/2025/09/image-1.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-1-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-1-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-1-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-1-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-1-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-1-700x394.png 700w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>



<h2 class="wp-block-heading">2. Multi-view Character Images</h2>

<p>The goal of this step is to turn the front T-pose image from step one into multi-view images. The overall approach:</p>

<ul class="wp-block-list">
<li>Obtain multi-view reference images to provide ControlNet guidance (pose, depth, canny) for later generation. (Skip this step if you already have multi-view ControlNet control images.)</li>

<li>Generate multi-view images from the T-pose image under the ControlNet guidance provided by the references.</li>

<li>Because reference-guided generation usually drifts on facial features, the faces still need processing; a face-swap approach works.</li>
</ul>

<h3 class="wp-block-heading">2.1 Obtaining Multi-view References</h3>

<p>You can search online for multi-view references with keywords such as "pose sheet", but directly usable references are rare.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="486" src="https://0to60.top/wp-content/uploads/2025/09/image-3-1024x486.png" alt="" class="wp-image-1022" srcset="https://0to60.top/wp-content/uploads/2025/09/image-3-1024x486.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-3-300x142.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-3-768x364.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-3-1536x729.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-3.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Some creators also share generic ControlNet control images, like the one below, but they are not very stable in practice.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="297" height="300" src="https://0to60.top/wp-content/uploads/2025/09/image-2.png" alt="" class="wp-image-1021"/></figure>



<p>After repeated experiments, the approach adopted here for more stable results is:</p>

<ul class="wp-block-list">
<li>Use MV-Adapter to generate a <strong>six-view</strong> sheet from the T-pose image first. Fine details will vary noticeably, but overall body shape and color stay consistent, so the ControlNet control images extracted afterwards fit the target well.</li>
</ul>

<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-4.png" alt="" class="wp-image-1023" srcset="https://0to60.top/wp-content/uploads/2025/09/image-4.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-4-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-4-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-4-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-4-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-4-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-4-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h3 class="wp-block-heading">2.2 Generating Multi-view Character Images</h3>

<p><strong>Preparation:</strong></p>

<p>The T-pose image supplies overall body shape, clothing, hairstyle and so on; the multi-view references supply the ControlNet control signal (pose images in this case).</p>

<p><strong>Overall approach:</strong></p>

<ul class="wp-block-list">
<li>The Fill+Redux combination can generate images similar to the T-pose (body shape, content, color, etc.), but the pose cannot be controlled. (Note: ComfyUI's built-in Redux is so strong that changing pose information is nearly impossible; Redux Adv is recommended.)</li>

<li>On top of Fill+Redux, add a ControlNet module to control the pose.</li>
</ul>

<p>The core of the workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-5.png" alt="" class="wp-image-1024" srcset="https://0to60.top/wp-content/uploads/2025/09/image-5.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-5-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-5-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-5-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-5-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-5-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-5-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>After applying the Fill+Redux workflow, we get the multi-view images we wanted:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-6.png" alt="" class="wp-image-1025" srcset="https://0to60.top/wp-content/uploads/2025/09/image-6.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-6-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-6-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-6-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-6-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-6-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-6-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h3 class="wp-block-heading"><strong>2.3 </strong>Face Swapping</h3>

<p>The multi-view images from 2.2 are very similar overall but differ noticeably in facial features, so a face-swap workflow is applied on top. This is where the head image from step 1.2 pays off.</p>

<p>There are many face-swap workflows; SD1.5/XL and Flux all have mature approaches. Flux is used here, with the following steps:</p>

<ul class="wp-block-list">
<li>Crop the head region out of the target image (the one whose face will be swapped).</li>

<li>Resize it to around one megapixel (&#177;30%); Flux generates well at that size.</li>

<li>Use Fill inpainting to repaint the face region of the crop, extracting facial features with PuLID + Redux + ACE (Portrait) at the same time.</li>

<li>Paste the repainted crop back onto the target image.</li>
</ul>
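<p>The resize step above can be sketched as a small helper. This is illustrative, not part of the workflow itself; snapping dimensions to multiples of 16 is our assumption (a common requirement for latent models), not something the article states.</p>

```python
import math

TARGET_PIXELS = 1_000_000  # ~1 MP, the size range Flux handles well per the text

def resize_for_flux(width: int, height: int, target: int = TARGET_PIXELS) -> tuple[int, int]:
    """Scale (width, height) so the pixel count is roughly `target`,
    keeping the aspect ratio. Dimensions are snapped to multiples of 16
    (an assumption here, common for latent-diffusion models)."""
    scale = math.sqrt(target / (width * height))

    def snap(v: float) -> int:
        return max(16, round(v * scale / 16) * 16)

    return snap(width), snap(height)
```

The result stays well within the &#177;30% window the article mentions, e.g. a 512&#215;512 crop becomes roughly 992&#215;992.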



<p>The core of the workflow is shown below:</p>

<p>Preparation before repainting:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-7.png" alt="" class="wp-image-1026" srcset="https://0to60.top/wp-content/uploads/2025/09/image-7.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-7-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-7-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-7-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-7-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-7-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-7-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>Applying Fill + Redux + ACE (Portrait) + PuLID:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-9-1024x576.png" alt="" class="wp-image-1028" srcset="https://0to60.top/wp-content/uploads/2025/09/image-9-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-9-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-9-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-9-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-9-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-9-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-9.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Restoring the repainted face region to the target image:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-10.png" alt="" class="wp-image-1029" srcset="https://0to60.top/wp-content/uploads/2025/09/image-10.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-10-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-10-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-10-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-10-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-10-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-10-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h2 class="wp-block-heading">3. Enriching Expressions</h2>

<p>The goal of this step is to add expressions to the multi-view images. The overall approach:</p>

<ul class="wp-block-list">
<li>Crop the images to keep only the region above the shoulders, so most of the material focuses on the head.</li>

<li>Transfer expressions.</li>
</ul>

<h3 class="wp-block-heading">3.1 Cropping</h3>

<p>Omitted; a simple workflow is enough to batch-process this.</p>
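<p>The "shoulders-up" crop can be sketched as pure geometry; the 0.35 keep-fraction below is an illustrative guess, not a value from the article, and a real batch job would apply this box with an image library.</p>

```python
def shoulders_up_box(width: int, height: int, keep_fraction: float = 0.35) -> tuple[int, int, int, int]:
    """Return a (left, top, right, bottom) crop box covering the top
    `keep_fraction` of the image, centered horizontally and kept square
    when possible (square crops suit training data)."""
    crop_h = int(height * keep_fraction)
    crop_w = min(width, crop_h)
    left = (width - crop_w) // 2
    return (left, 0, left + crop_w, crop_h)
```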



<h3 class="wp-block-heading">3.2 Expression Transfer</h3>

<p>One piece of preparation for expression transfer is finding expression reference images. There are many ways; a few examples:</p>

<ul class="wp-block-list">
<li>Generate some yourself. Characters generated with base models have a limited range of expressions, so look for expression LoRAs.</li>

<li>Find ready-made references, e.g. with the keyword "expression sheet".</li>
</ul>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="911" src="https://0to60.top/wp-content/uploads/2025/09/image-12.png" alt="" class="wp-image-1032" srcset="https://0to60.top/wp-content/uploads/2025/09/image-12.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-12-300x142.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-12-1024x486.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-12-768x364.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-12-1536x729.png 1536w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>With expression references in hand, LivePortrait can transfer the expressions. Notably, LivePortrait underperforms when transferring across styles (photoreal, 3D, 2D), especially around the eyes; after extensive experimentation, the workflow below was settled on.</p>

<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-11.png" alt="" class="wp-image-1031" srcset="https://0to60.top/wp-content/uploads/2025/09/image-11.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-11-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-11-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-11-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-11-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-11-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-11-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h2 class="wp-block-heading">4. Enriching Backgrounds</h2>

<p>There are many ways to add backgrounds; Flux can also swap backgrounds via Fill+Redux repainting. IC-Light is used here. As of this writing, local IC-Light deployments only support SD1.5, but the results handle both background and lighting well.</p>

<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-13.png" alt="" class="wp-image-1033" srcset="https://0to60.top/wp-content/uploads/2025/09/image-13.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-13-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-13-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-13-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-13-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-13-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-13-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>With that, the training material is complete. (During generation, InstantID could also have been used to preserve facial features.)</p>



<figure class="wp-block-video"><video height="906" style="aspect-ratio: 680 / 906;" width="680" controls src="https://0to60.top/wp-content/uploads/2025/09/20250915164626_rec_.mp4"></video></figure>



<h2 class="wp-block-heading">5. Material Preparation</h2>

<h3 class="wp-block-heading">5.1 Resizing</h3>

<p>All images generated in this run are 768x768, which already suits training, so this was not needed in practice.</p>

<p>Omitted.</p>

<h3 class="wp-block-heading"><strong>5.2 Batch Captioning</strong></h3>

<p>Many training tools ship a captioning feature (Fluxgym, the 秋叶 trainer, etc.), each with drawbacks; JoyCaption is used here. Note: the trigger word should be an uncommon token; the "Utwin" used in this run was actually a poor choice.</p>
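<p>A minimal sketch of inserting a trigger token into caption files in bulk, assuming the captioner has already written one .txt per image; the function name and the example trigger token are ours, not from the article.</p>

```python
from pathlib import Path

def prepend_trigger(caption_dir: str, trigger: str) -> int:
    """Prepend a trigger token to every .txt caption in a folder.

    `trigger` should be a rare token (the article notes its choice,
    "Utwin", was not ideal). Returns the number of files modified;
    already-tagged files are skipped, so the call is idempotent."""
    count = 0
    for path in Path(caption_dir).glob("*.txt"):
        text = path.read_text(encoding="utf-8").strip()
        if not text.startswith(trigger):
            path.write_text(f"{trigger}, {text}", encoding="utf-8")
            count += 1
    return count
```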



<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-14.png" alt="" class="wp-image-1036" srcset="https://0to60.top/wp-content/uploads/2025/09/image-14.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-14-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-14-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-14-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-14-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-14-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-14-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h2 class="wp-block-heading"><strong>6. Model Training</strong></h2>

<p>Any trainer will do; this run used the 秋叶 model trainer.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-15.png" alt="" class="wp-image-1037" srcset="https://0to60.top/wp-content/uploads/2025/09/image-15.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-15-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-15-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-15-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-15-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-15-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-15-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>Main parameters:</p>

<ul class="wp-block-list">
<li>Repeats per image per epoch: there is no setting for it, but it is effectively 5 per image. When training starts, all images in the input path are packed into a folder named in the form times_randomtext, where "times" is how many times each image is trained.</li>

<li>Total epochs: set to 10 here.</li>

<li>Save a checkpoint every 2 epochs.</li>

<li>network_dim: 32.</li>

<li>Learning rate: 1e-4.</li>
</ul>
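<p>The parameters above imply a total step count that can be computed as follows; the batch size of 1 is our assumption, since the article does not state it.</p>

```python
def total_steps(num_images: int, repeats: int = 5, epochs: int = 10, batch_size: int = 1) -> int:
    """Total optimizer steps for the run described above: every image is
    seen `repeats` times per epoch (the number encoded in the trainer's
    `times_randomtext` folder name), for `epochs` epochs.
    batch_size=1 is an assumption."""
    return (num_images * repeats * epochs) // batch_size
```

For example, 40 training images at 5 repeats over 10 epochs gives 2000 steps.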



<p>This was a first training run, and many influencing factors deserve further study. For this run specifically:</p>

<ul class="wp-block-list">
<li>The model clearly learned the hairstyle and the evening gown's shoulder straps, which was not intended and is strongly tied to the material and the captioning approach.</li>

<li>The facial features did not reach the level I wanted, though the 3D style turned out unexpectedly well.</li>
</ul>

<p>Actual generation results:</p>



<figure class="wp-block-image size-full lightbox"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-16.png" alt="" class="wp-image-1038" srcset="https://0to60.top/wp-content/uploads/2025/09/image-16.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-16-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-16-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-16-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-16-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-16-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-16-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>That wraps up this hands-on run at character consistency. Every step can be done in multiple ways; I plan to share video walkthroughs later, so stay tuned.</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/">ComfyUI: Consistent Characters in Practice</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://0to60.top/wp-content/uploads/2025/09/20250915164626_rec_.mp4" length="4916764" type="video/mp4" />

			</item>
		<item>
		<title>ComfyUI: Flux Fundamentals</title>
		<link>https://0to60.top/comfyui%ef%bc%8cflux%e5%85%a5%e9%97%a8%e6%80%bb%e7%bb%93%e7%af%87/</link>
					<comments>https://0to60.top/comfyui%ef%bc%8cflux%e5%85%a5%e9%97%a8%e6%80%bb%e7%bb%93%e7%af%87/#respond</comments>
		
		<dc:creator><![CDATA[burson]]></dc:creator>
		<pubDate>Sun, 14 Sep 2025 03:46:59 +0000</pubDate>
				<category><![CDATA[ComfyUI-SD]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<category><![CDATA[FLUX]]></category>
		<guid isPermaLink="false">https://0to60.top/?p=940</guid>

					<description><![CDATA[<p>Introduction: Prerequisites for this article (required)&#46;&#46;&#46;</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8cflux%e5%85%a5%e9%97%a8%e6%80%bb%e7%bb%93%e7%af%87/">ComfyUI: Flux Fundamentals</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>

<p>Prerequisite reading for this article (required):</p>

<ul class="wp-block-list">
<li>Read <a href="https://0to60.top/2025/07/09/comfyui%ef%bc%8csd1-5-xl%e5%9f%ba%e7%a1%80%e6%80%bb%e7%bb%93%e7%af%87/" data-type="post" data-id="931">"ComfyUI: SD Fundamentals"</a></li>
</ul>

<p>This article covers:</p>

<ul class="wp-block-list">
<li>Understanding the core concepts</li>

<li>A summary of the basic workflows and the main control modules</li>
</ul>

<p>In <a href="https://0to60.top/2025/07/09/comfyui%ef%bc%8csd1-5-xl%e5%9f%ba%e7%a1%80%e6%80%bb%e7%bb%93%e7%af%87/" data-type="post" data-id="931">"ComfyUI: SD Fundamentals"</a> we met Stable Diffusion 1.5 and SDXL. They are like master <strong>"sculptors"</strong> who carve stunning works step by step out of "raw stone", representing today's mainstream AI image generation.</p>

<p>Now it is time to unveil another frontier of AI image generation: Flux. Flux is not a simple upgrade of SDXL; it represents an architectural innovation in how images are generated and brings new possibilities to AI image making. It is not meant to replace SDXL but offers a different, promising technical direction.</p>

<h2 class="wp-block-heading">1. Understanding Flux</h2>

<p>The underlying technology differs substantially, but at the application level the SD1.5/XL mental model from <a href="https://0to60.top/2025/07/09/comfyui%ef%bc%8csd1-5-xl%e5%9f%ba%e7%a1%80%e6%80%bb%e7%bb%93%e7%af%87/" data-type="post" data-id="931" target="_blank" rel="noreferrer noopener nofollow">"ComfyUI: SD Fundamentals"</a> still applies. In one sentence:</p>

<p><strong>A distinctive sculptor takes raw stone and, guided by a design brief, carves it according to some sculpting strategy.</strong> The same core concepts apply:</p>



<ul class="wp-block-list">
<li><strong>The sculptor:</strong> the diffusion model. Each sculptor has its own specialty, e.g. Flux1-dev, Flux1-fill-dev, Flux1-canny/depth-dev, Flux1-kontext.</li>

<li><strong>The raw stone:</strong> the latent image. SD1.5, SDXL and FLUX all generate images in latent space, chiefly because it compresses the data enormously, making the computation feasible on ordinary hardware. Think of it as the raw stone for the work. Ways to obtain it:
<ul class="wp-block-list">
<li>Provide a latent image directly, specifying width and height.</li>

<li>Convert a pixel-space image (what human eyes perceive) into a latent image via VAE Encode.</li>
</ul>
</li>

<li><strong>The decolorizer/colorizer:</strong> VAE Encode/Decode. Encoding turns a pixel-space image into a latent image, like stripping the color off a finished sculpture to recover the bare stone; Decode goes the other way, converting a latent image back into a pixel-space image, like painting the finished carving. These rules live in the VAE model, which must be loaded.</li>

<li><strong>The design brief:</strong> text embeddings (positive + negative) that guide the sculptor, consulted at every carving step. To obtain one:
<ul class="wp-block-list">
<li>A text encoder converts natural language (usually English) into a brief the sculptor understands; the conversion rules live in the CLIP model, which must be loaded.</li>
</ul>
</li>

<li><strong>The sculpting strategy:</strong> scheduler, denoising steps, and sampler. The scheduler lays out the overall plan (the "roadmap") for how the excess stone should be gradually removed; the step count sets how many passes to make (more passes take longer but are finer); the sampler executes each step along the roadmap, computing and carving away the excess stone.</li>
</ul>
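<p>The latent-space compression mentioned above can be made concrete. The spatial downscale factor of 8, with 4 latent channels for SD-family models and 16 for Flux, are well-known figures assumed here rather than taken from the article.</p>

```python
def latent_shape(width: int, height: int, downscale: int = 8, channels: int = 16) -> tuple[int, int, int]:
    """Shape (channels, h, w) of the latent for an RGB image of (width, height).

    Assumes each side is downsampled by 8; channels defaults to 16 (Flux),
    pass channels=4 for SD1.5/SDXL."""
    return (channels, height // downscale, width // downscale)

def compression_ratio(width: int, height: int, channels: int = 16) -> float:
    """How many pixel-space values map onto one latent value."""
    c, lh, lw = latent_shape(width, height, channels=channels)
    return (width * height * 3) / (c * lh * lw)
```

For SD1.5 at 512x512 this gives a 48:1 reduction, which is why ordinary GPUs can run the computation.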



<h2 class="wp-block-heading">2. Core Components and Concepts</h2>

<p>This section covers the components and concepts that differ from SD1.5/XL.</p>

<ul class="wp-block-list">
<li><strong>DualCLIPLoader:</strong> as with SD1.5 and XL, Flux still needs CLIP models to turn natural language into the design brief, but here two models are specified: CLIP-L and T5. The design follows SDXL's dual-encoder idea.</li>

<li><strong>Load VAE:</strong> unlike SD1.5 and XL, Flux requires its own matching VAE model, usually named ae.sft or ae.safetensors.</li>

<li><strong>FluxGuidance:</strong> a major difference from SD1.5/XL, standing in for their CFG.
<ul class="wp-block-list">
<li>Flux merges the guidance conditions; it no longer distinguishes positive and negative. (If you use KSampler, the negative input can take an empty conditioning.)</li>

<li>In practice the cfg parameter is no longer used; it is replaced by the guidance parameter, which sets how strongly the conditioning steers generation. (If you still use KSampler, set cfg to 1, otherwise you get mushy images.)</li>

<li>For the guidance value: with Flux1-dev, 3.5 is typical; with the Fill+Redux combination, pick a value in [30, 50].</li>
</ul>
</li>

<li><strong>Redux:</strong> Flux's companion style model. In practice it extracts detail information from a reference image via CLIP Vision Encode, then, through Apply Style Model (with the Redux model as the rule converter), folds it into the <strong>design brief</strong>. Note that Redux's guidance is very strong; with the stock node it barely cooperates with ControlNet or other control modules during generation, so the Redux Adv node is recommended (verified effective in practice).</li>
</ul>
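<p>The guidance values above can be collected into a small helper. The names and structure are illustrative conventions of ours, not a ComfyUI API.</p>

```python
# Illustrative defaults distilled from the notes above.
GUIDANCE_DEFAULTS = {
    "flux-dev": 3.5,     # plain Flux1-dev generation
    "fill-redux": 30.0,  # Fill + Redux; values in [30, 50] work
}

def sampler_settings(task: str) -> dict:
    """KSampler-style settings for Flux: cfg pinned at 1 (to avoid mushy
    images), with conditioning strength moved to FluxGuidance."""
    return {"cfg": 1.0, "flux_guidance": GUIDANCE_DEFAULTS[task]}
```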



<h2 class="wp-block-heading"><strong>3. Flux Workflows</strong></h2>

<p>In hands-on exploration, FLUX follows the same lines of thinking as SD1.5/XL. After learning basic text-to-image, image-to-image, and inpainting, add further control modules, mainly:</p>

<ul class="wp-block-list">
<li>LoRA: some interesting ones, including removal LoRAs and image-editing LoRAs</li>

<li>IP-Adapter: extra training for the sculptor (style/content reference)</li>

<li>PuLID: extra training for the sculptor (face reference)</li>

<li>ControlNet: supplementary notes for the design brief</li>

<li>Redux: supplementary notes for the design brief</li>
</ul>

<p>Combining these yields the many variants of FLUX workflows.</p>

<p>This part covers:</p>

<ul class="wp-block-list">
<li>The basic FLUX.1-DEV workflow</li>

<li>The basic FLUX.1-FILL-DEV workflow</li>

<li>The Redux module and the remaining control modules, which differ little from SD1.5/XL</li>
</ul>

<p>The FLUX.1-canny/depth models have few use cases, so this section focuses on FLUX-specific modules plus basic workflow summaries for the FLUX.1-DEV and FLUX.1-FILL-DEV models.</p>



<h3 class="wp-block-heading">3.1 The FLUX.1-DEV Workflow</h3>

<h4 class="wp-block-heading">3.1.1 Basic Text-to-Image</h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-17-1024x576.png" alt="" class="wp-image-1058" srcset="https://0to60.top/wp-content/uploads/2025/09/image-17-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-17-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-17-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-17-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-17-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-17-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-17.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Little differs from SD1.5/XL; the main differences:</p>

<ul class="wp-block-list">
<li>model/clip/vae are loaded differently:
<ul class="wp-block-list">
<li>the model loads via the Load Diffusion Model node</li>

<li>the clip loads via DualCLIPLoader, which needs a CLIP-L and a T5 model</li>

<li>the vae must be the Flux-matched model, ae.sft</li>
</ul>
</li>

<li>KSampler usage changes:
<ul class="wp-block-list">
<li>the cfg parameter is fixed at 1 (it no longer does anything); adherence to the design brief moves to the Flux guidance parameter: the higher it is, the stronger the constraint; the lower, the more creative freedom.</li>

<li>since Flux no longer distinguishes positive and negative prompts, leave the negative prompt empty when using KSampler.</li>
</ul>
</li>
</ul>

<h4 class="wp-block-heading">3.1.2 Basic Image-to-Image</h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-18-1024x576.png" alt="" class="wp-image-1060" srcset="https://0to60.top/wp-content/uploads/2025/09/image-18-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-18-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-18-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-18-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-18-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-18-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-18.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Not much to say here; direct image-to-image is rarely used. Two current cases:</p>

<ul class="wp-block-list">
<li>upscale-and-repaint to add detail</li>

<li>rough style transfer</li>
</ul>

<h3 class="wp-block-heading">3.2 The FLUX.1-FILL-DEV Workflow</h3>

<h4 class="wp-block-heading">3.2.1 Inpainting</h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-23-1024x576.png" alt="" class="wp-image-1075" srcset="https://0to60.top/wp-content/uploads/2025/09/image-23-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-23-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-23-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-23-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-23-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-23-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-23.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Broadly similar to SD1.5/XL, with these differences in use:</p>

<ul class="wp-block-list">
<li>Inpainting uses the FILL model.</li>

<li>In the basic workflow the FluxGuidance parameter is usually set to 30; other values are worth trying.</li>

<li>In every FILL workflow it is advisable to add a Differential Diffusion node, whose core role is to <strong>reference an existing image so that newly generated content stays consistent with it in style, color, and atmosphere</strong>.</li>
</ul>

<h4 class="wp-block-heading">3.2.2 Outpainting</h4>

<p>Outpainting is not very different from inpainting: instead of repainting an existing region of the image, it adds new regions and fills them with content.</p>
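<p>The geometry of outpainting can be sketched as follows: the canvas grows, the original image keeps its pixels, and only the padding becomes the repaint mask. An illustrative helper of ours, not a ComfyUI node.</p>

```python
def outpaint_geometry(width: int, height: int,
                      pad_left: int = 0, pad_top: int = 0,
                      pad_right: int = 0, pad_bottom: int = 0):
    """Return the new canvas size plus the box the original image occupies
    on it. Everything outside that box is the region the FILL model paints;
    the original pixels stay untouched."""
    new_w = width + pad_left + pad_right
    new_h = height + pad_top + pad_bottom
    image_box = (pad_left, pad_top, pad_left + width, pad_top + height)
    return (new_w, new_h), image_box
```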



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-24-1024x576.png" alt="" class="wp-image-1076" srcset="https://0to60.top/wp-content/uploads/2025/09/image-24-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-24-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-24-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-24-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-24-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-24-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-24.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading">3.3 Other FLUX Control Modules</h3>

<h4 class="wp-block-heading"><strong>3.3.1 Some Fun LoRAs</strong></h4>

<ul class="wp-block-list">
<li>Removal LoRA: combined with the FILL model, it removes unwanted elements cleanly and can double as repair.</li>

<li>Acceleration LoRA: brings generation down to 8 or even 4 steps.</li>

<li>Editing LoRA: combined with the FILL model, it enables image editing. Details omitted here.</li>
</ul>

<h4 class="wp-block-heading">3.3.2 The FLUX.1-Redux-DEV Module</h4>

<p>Redux is a module specific to Flux; the workflow is shown below:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-19-1024x576.png" alt="" class="wp-image-1061" srcset="https://0to60.top/wp-content/uploads/2025/09/image-19-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-19-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-19-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-19-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-19-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-19-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-19.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Viewed by its inputs and outputs, the module is just two steps:</p>

<ul class="wp-block-list">
<li>Extract the image information (CLIP Vision Encode); the CLIP Vision model supplies the extraction rules.</li>

<li>Feed the extracted information through a converter (Apply Style Model) into the design brief (the embedding); the Redux model supplies the conversion rules.</li>
</ul>

<p>In practice, Redux captures overall color style, content, and structure quite well, but underperforms on fine details such as facial features.</p>



<h4 class="wp-block-heading">3.3.3 The Flux IP-Adapter Module</h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-20-1024x576.png" alt="" class="wp-image-1072" srcset="https://0to60.top/wp-content/uploads/2025/09/image-20-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-20-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-20-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-20-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-20-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-20-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-20.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Usage differs little from SD1.5/XL; it requires the x-flux-comfyui plugin from the XLabs team.</p>

<p>Viewed by its inputs and outputs:</p>

<ul class="wp-block-list">
<li>It first extracts the image information using the CLIP Vision model's rules, then trains the sculptor (the diffusion model) through the FLUX IP-Adapter model.</li>
</ul>



<h4 class="wp-block-heading">3.3.4 The PuLID Module</h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-21-1024x576.png" alt="" class="wp-image-1073" srcset="https://0to60.top/wp-content/uploads/2025/09/image-21-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-21-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-21-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-21-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-21-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-21-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-21.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>It is the counterpart of FaceID and can also be used with SDXL.<br>The FLUX version requires the PuLID-Flux-Enhanced plugin.</p>

<p>It extracts facial information and gives the sculptor (the diffusion model) extra training on it.</p>



<h4 class="wp-block-heading">3.3.5 ControlNet</h4>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-25-1024x576.png" alt="" class="wp-image-1077" srcset="https://0to60.top/wp-content/uploads/2025/09/image-25-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-25-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-25-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-25-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-25-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-25-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-25.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>没什么好说的，为设计需求说明书增加额外条件信息。</p>



<h2 class="wp-block-heading">三、最后</h2>



<p>Flux的变体工作流基本上就是基本工作流+控制模块的灵活组合，后续有机会可能出视频或者直播进行细致讲解，敬请期待。</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8cflux%e5%85%a5%e9%97%a8%e6%80%bb%e7%bb%93%e7%af%87/">ComfyUI，Flux入门总结篇</a>最先出现在<a rel="nofollow" href="https://0to60.top">0260</a>。</p>
]]></content:encoded>
					
					<wfw:commentRss>https://0to60.top/comfyui%ef%bc%8cflux%e5%85%a5%e9%97%a8%e6%80%bb%e7%bb%93%e7%af%87/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ComfyUI，SD基础总结篇</title>
		<link>https://0to60.top/comfyui%ef%bc%8csd1-5-xl%e5%9f%ba%e7%a1%80%e6%80%bb%e7%bb%93%e7%af%87/</link>
					<comments>https://0to60.top/comfyui%ef%bc%8csd1-5-xl%e5%9f%ba%e7%a1%80%e6%80%bb%e7%bb%93%e7%af%87/#respond</comments>
		
		<dc:creator><![CDATA[burson]]></dc:creator>
		<pubDate>Wed, 09 Jul 2025 04:36:59 +0000</pubDate>
				<category><![CDATA[ComfyUI-SD]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<guid isPermaLink="false">https://0to60.top/?p=931</guid>

					<description><![CDATA[<p>引言 本文对SD基础知识进行一个总结&#46;&#46;&#46;</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8csd1-5-xl%e5%9f%ba%e7%a1%80%e6%80%bb%e7%bb%93%e7%af%87/">ComfyUI，SD基础总结篇</a>最先出现在<a rel="nofollow" href="https://0to60.top">0260</a>。</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading"><strong>引言</strong></h2>



<p>本文对SD基础知识进行一个总结。</p>



<h2 class="wp-block-heading">一、核心概念理解</h2>



<p>在掌握了 Stable Diffusion 1.5 和 SDXL 的基础知识后，我们通过&#8220;雕塑&#8221;概念以一句话来总结一下：</p>



<p><strong>独具特色的雕塑家拿着原始石料在设计需求说明书的指引下按某种雕塑策略进行雕塑创作。</strong></p>



<ul class="wp-block-list">
<li><strong>雕塑家：</strong>Unet Model，每个雕塑家都有自己的特色，即每个Unet模型都有自己的特点。</li>



<li><strong>原始石料：</strong>latent image，潜空间图像。SD1.5&amp;SDXL都是在潜空间进行图像生成的，之所以需要在潜空间中进行，最大原因是能极大地压缩计算量，使得寻常硬件得以计算。总之，可以将其理解为创作所需的原始石料，以下是获取原始石料的方式：
<ul class="wp-block-list">
<li>直接提供latent image，设定长宽。</li>



<li>将原始空间图像(寻常人眼感知的图片)通过VAE Encode的方式转换成潜空间图像。</li>
</ul>
</li>



<li><strong>作品去色/上色器</strong>：VAE Encode/Decode，将原始空间图像(人眼寻常感知的图片)转变成潜空间图像，这个过程也称为编码Encode，好比对雕塑成品进行去色，从而得到石料本体；同时也支持从潜空间图像转换成原始空间图像，这个过程称为Decode，好比对雕塑完毕的石料本体进行上色。这个去色和上色规则记录在VAE model中，即需要加载这个模型。</li>



<li><strong>设计需求说明书</strong>：Text Embedding(pos + neg)，用于指导雕塑家进行雕刻，雕塑家每次雕刻都需参照设计需求说明书。如何获得需求设计说明书：
<ul class="wp-block-list">
<li>通过一个转换器Text Embedding Encode将自然语言(人类语言，通常是英文)转换成雕塑家能理解的设计需求说明书，转换规则被记录在CLIP model中，即需要加载这个模型。</li>
</ul>
</li>



<li><strong>雕塑策略</strong>：调度器(scheduler)、去噪步数(steps)、采样器(sampler)。好比雕刻策略：先由调度器制定整个雕刻计划(“路线图”)，决定噪声(多余石料)在整个生成过程中应如何逐渐减少；再由去噪步数定义执行步骤数量，步数越多，耗时越长，雕刻越细致；最后由采样器基于路线图，在每一步中具体执行雕刻动作，计算并去除多余的石料。</li>
</ul>
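<p>上述“雕塑”流程可以用一段纯示意的Python代码串起来（所有函数均为占位实现，并非真实的ComfyUI/SD接口，仅展示各核心概念之间的数据流向）：</p>

```python
import numpy as np

# 示意代码：把"雕塑家 + 原始石料 + 设计需求说明书 + 雕塑策略"串成一个
# 极简的去噪循环。所有函数均为占位实现，仅展示各核心概念之间的数据流向。

def clip_encode(prompt):                      # CLIP：自然语言 -> 设计需求说明书
    return np.ones(4) * len(prompt)

def unet_predict_noise(latent, cond, sigma):  # Unet：雕塑家预测要去除的"石料"
    return 0.1 * latent

def scheduler_sigmas(steps):                  # 调度器：整个雕刻计划("路线图")
    return np.linspace(1.0, 0.0, steps)

def vae_decode(latent):                       # VAE Decode：给石料"上色"成像素图
    return np.clip(latent, 0, 1)

def text_to_image(prompt, seed=42, steps=4):
    rng = np.random.default_rng(seed)         # seed 决定取哪块"石料"
    latent = rng.standard_normal((8, 8))      # latent image：潜空间的原始石料
    cond = clip_encode(prompt)
    for sigma in scheduler_sigmas(steps):     # 采样器：每一步按路线图去一点噪
        latent = latent - unet_predict_noise(latent, cond, sigma)
    return vae_decode(latent)

img = text_to_image("a cat", seed=42, steps=4)
print(img.shape)  # (8, 8)
```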



<h2 class="wp-block-heading"><strong>二、其他组件的理解</strong></h2>



<p>理解了核心概念，其余所有组件都可以围绕这些核心概念进行优化或补充，如：</p>



<ul class="wp-block-list">
<li><strong>LoRA：</strong>强化雕塑家的能力，同时对设计需求说明书中的某些概念进行补充说明。</li>



<li><strong>TI：</strong>在设计需求说明书中增加概念并描述概念详细信息。</li>



<li><strong>IPAdapter：</strong>让雕塑家适配新能力，该能力由参考图和IPAdapter模型共同确定。IPAdapter首先用一个CLIP-VISION转换器，将参考图中的视觉信息提取出来，这个转换器的规则记录在CLIP-Vision 模型中。视觉信息提取出来后，再由一个适配器将视觉信息进行筛选并融入unet model中，好比对雕塑家进行定向训练。</li>



<li><strong>ControlNet：</strong>对设计需求说明书进行约束补充(结构、姿态、构图、空间关系和物理形态等)。CN拿着控制图通过一个转换器(depth、canny类型等)将约束补充融入设计需求说明书中，转换规则记录在CN model中，一般来说，控制图的类型需要和CN model匹配。这个控制图可以通过一个预处理器(preprocessor)从原始空间图像(寻常人眼中的图像)提取出来，提取规则记录在preprocessor model中。</li>



<li><strong>InstantID：</strong>适配雕塑家的能力，并对设计需求说明书进行补充约束。InstantID会通过一个人脸识别模块从参考图中提取人脸信息(face embedding)，再通过一个转换器将face embedding融入到unet model中，这个转换规则记录在instantID model中，好比让雕塑家学习该脸部的雕刻方式。不仅如此，提取的脸部信息还会被另一个转换器融入到设计需求说明书中，转换规则记录在一个CN model中，相当于在设计需求说明书中告诉雕塑家这次要画哪张脸。另外，也可以看出InstantID分为3部分：一是人脸信息提取(可以有多种方式，常见如insight face)；二是让雕塑家掌握人脸的雕刻方式(类似IPAdapter)；三是在设计需求说明书中补充该人脸信息(类似ControlNet)。</li>



<li><strong>SDXL</strong>其实也是在这些核心概念上进行了优化。
<ul class="wp-block-list">
<li>雕塑家 &amp; 原始石料：SD1.5在512 x 512更为擅长，而SDXL在1024 x 1024上能够大展身手，更大的空间意味着更多的细节。</li>



<li>设计需求说明书：SD1.5是通过一个CLIP模型将自然语言转换成设计需求说明书，而SDXL通过两个CLIP模型将其转换成设计需求说明书，能更好地理解语义及空间关系。</li>



<li>雕塑策略：将执行步骤整体分为两段，前段让一个雕塑家（base model）进行雕刻，后段让另一个雕塑家(Refiner model)进行细节打磨。两个雕塑家的擅长偏向不同。</li>



<li>作品去色/上色器：毕竟石料尺寸不同，VAE也进行了提升，有更宽广的色域和更高精度的“笔刷”，它能识别并还原更微妙的色彩层次，让作品的表面光泽和纹理更加逼真，减少“塑料感”。</li>
</ul>
</li>
</ul>



<p>另外，相关参数也可围绕核心概念进行辅助理解</p>



<ul class="wp-block-list">
<li><strong>sampler node > </strong><strong>seed：</strong>原始石料各种各样，而seed好比是石料的存放地址，结合长宽值，便确定具体石料。</li>



<li><strong>sampler node</strong> > <strong>cfg：</strong>雕塑家对设计需求说明书的遵循程度。</li>



<li><strong>sampler node</strong> > <strong>denoise strength：</strong>重绘幅度，文生图中默认为1，图生图中好比在雕刻前先往上面涂泥巴，这样才有创作空间，涂得越多，创作空间越大，和原始图像偏离程度就越高。</li>



<li><strong>other node > start_at/end_at：</strong>描述组件效果从雕塑策略的哪一步开始生效、到哪一步结束。</li>



<li>其他都可以套用这种思想去辅助咱们理解。</li>
</ul>
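<p>其中cfg和denoise strength两个参数的作用，可以用一小段示意代码理解（cfg_mix用的是通用的CFG公式，img2img_start_step是本文虚构的等价理解，并非ComfyUI内部实现的逐行还原）：</p>

```python
import numpy as np

# 示意 cfg 与 denoise strength 两个参数的含义。cfg_mix 为通用的 CFG 公式，
# img2img_start_step 为本文虚构的等价理解，并非 ComfyUI 内部实现。

def cfg_mix(uncond_pred, cond_pred, cfg=7.0):
    """cfg 越大，越偏向正向提示词，即越"遵循设计需求说明书"。"""
    return uncond_pred + cfg * (cond_pred - uncond_pred)

def img2img_start_step(total_steps, denoise=0.6):
    """denoise 决定图生图从第几步开始雕刻：涂的"泥巴"越多，重绘越彻底。"""
    return round(total_steps * (1 - denoise))

print(cfg_mix(np.array([0.0]), np.array([1.0]), cfg=7.0))  # [7.]
print(img2img_start_step(20, denoise=0.6))                 # 8
```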



<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8csd1-5-xl%e5%9f%ba%e7%a1%80%e6%80%bb%e7%bb%93%e7%af%87/">ComfyUI，SD基础总结篇</a>最先出现在<a rel="nofollow" href="https://0to60.top">0260</a>。</p>
]]></content:encoded>
					
					<wfw:commentRss>https://0to60.top/comfyui%ef%bc%8csd1-5-xl%e5%9f%ba%e7%a1%80%e6%80%bb%e7%bb%93%e7%af%87/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ComfyUI，SD控图入门篇，Controlnet&#038;InstantID</title>
		<link>https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e5%85%a5%e9%97%a8%e7%af%87%ef%bc%8ccontrolnetinstantid/</link>
					<comments>https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e5%85%a5%e9%97%a8%e7%af%87%ef%bc%8ccontrolnetinstantid/#respond</comments>
		
		<dc:creator><![CDATA[burson]]></dc:creator>
		<pubDate>Wed, 02 Jul 2025 09:43:14 +0000</pubDate>
				<category><![CDATA[ComfyUI-SD]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<guid isPermaLink="false">https://0to60.top/?p=900</guid>

					<description><![CDATA[<p>引言 Stable Difussio&#46;&#46;&#46;</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e5%85%a5%e9%97%a8%e7%af%87%ef%bc%8ccontrolnetinstantid/">ComfyUI，SD控图入门篇，Controlnet&amp;InstantID</a>最先出现在<a rel="nofollow" href="https://0to60.top">0260</a>。</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">引言</h2>



<p>Stable Diffusion的作用主要是生图，而图片之所以能产生价值，一定是符合应用落地的某些需求，这就要求咱们从“随机生成”逐步迈向“精准控制”。</p>



<p>目前在comfyUI中，大致有这么几个主要方式可以进行控图。</p>



<ul class="wp-block-list">
<li>模型本身，提示词，LoRA，TI</li>



<li>IPAdapter</li>



<li>ControlNet &amp; InstantID</li>
</ul>



<p>本文主要就ControlNet和InstantID进行总结。</p>



<h2 class="wp-block-heading">一、ControlNet</h2>



<h3 class="wp-block-heading">1.1 初步认识ControlNet</h3>



<figure class="wp-block-riovizual-tablebuilder is-style-stripes rv_tb-ae9b4e97-0115-44b9-8b42-fea7eda3dbdd is-scroll-on-mobile" rv-tb-responsive-breakpoint="768px"><table class=""><thead><tr><th class="rv_tb-cell rv_tb-row-0-cell-0 rv_tb-rs-row-0-cell-0 rv_tb-cs-row-0-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">维度</div></div></div></th><th class="rv_tb-cell rv_tb-row-0-cell-1 rv_tb-rs-row-0-cell-1 rv_tb-cs-row-0-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">描述</div></div></div></th></tr></thead><tbody><tr><td class="rv_tb-cell rv_tb-row-1-cell-0 rv_tb-rs-row-1-cell-0 rv_tb-cs-row-1-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">定位</div></div></div></td><td class="rv_tb-cell rv_tb-row-1-cell-1 rv_tb-rs-row-1-cell-1 rv_tb-cs-row-1-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">如果IP-Adapter是将图片中的风格、身份特征的视觉信息进行迁移，那么ControlNet就是能从参考图中，用各种专业的分析工具(预处理器)提取更多不同维度、不同层级的视觉信息，然后增强设计需求说明文档(prompt embedding)，从而引导雕塑家（Unet model）进行雕塑创作。</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-2-cell-0 rv_tb-rs-row-2-cell-0 rv_tb-cs-row-2-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">输入/输出</div></div></div></td><td class="rv_tb-cell rv_tb-row-2-cell-1 rv_tb-rs-row-2-cell-1 rv_tb-cs-row-2-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">输入</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li>Prompt Embedding(NEG+POS)：CLIP FOR TEXT PROMPT的输出</li><li>Control Image：引导控制图，用于约束引导雕塑过程。</li><li>ControlNet Model：将引导控制图转换成赛博雕塑世界能识别的引导手册，并给到雕塑家在雕塑过程中用于参考。</li></ul></div></div><div class="rv_tb-element"><div 
class="rv_tb-text-wrap rv_justify cell-element-2"><div class="rv_tb-text">输出</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-3"><ul class="rv_tb-list"><li>Prompt Embedding：被增强过的设计需求说明文档(CLIP FOR TEXT PROMPT的输出)，用于引导雕塑家(Unet Model)进行雕刻。</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-3-cell-0 rv_tb-rs-row-3-cell-0 rv_tb-cs-row-3-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">预处理器 (Preprocessor)</div></div></div></td><td class="rv_tb-cell rv_tb-row-3-cell-1 rv_tb-rs-row-3-cell-1 rv_tb-cs-row-3-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">ControlNet 并没有直接理解像素图像，它首先需要一个一些分析工具。这个分析工具会分析你的原始图像，并根据你选择的 ControlNet 类型，从中提取出非常具体的、结构化的“条件图像”。条件图像大致分为以下类型</div></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-1"><div class="rv_tb-text"><strong>结构与轮廓</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-2"><ul class="rv_tb-list"><li><strong>Canny</strong>(边缘)：提取图像中最清晰、最锐利的边缘线。它对图像的明暗交界、颜色突变非常敏感。<strong>适用场景：</strong>需要严格保持物体形状、建筑结构、人物轮廓的精确性。</li><li><strong>Lineart</strong>(线条艺术)：提取更具艺术感、更像手绘线稿的线条，它通常能处理主线和细节线。<strong>适用场景：</strong>将照片转绘成漫画、插画、日系动漫线条风格。</li><li><strong>Softedge / HED / PIDS</strong> (柔和边缘）：提取更柔和、更艺术化、更不那么锐利的边缘信息，有时也能捕捉到物体间的微妙边界和阴影过渡。<strong>适用场景：</strong>希望保持大致构图，但又想给 AI 更多自由度去填充细节和风格时；避免线条过硬而影响融合。</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-3"><div class="rv_tb-text"><strong>姿态与骨架</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-4"><ul class="rv_tb-list"><li><strong>OpenPose </strong>(人体骨架）：精确检测人物的骨骼关键点、身体姿态、肢体朝向，以及手部和面部（眼睛、鼻子、嘴巴等）的关键点。<strong>适用场景：</strong> 精准控制人物的动作、站姿、手势、身体朝向等，而完全不限制人物的外貌、服装和风格。</li></ul></div></div><div class="rv_tb-element"><div 
class="rv_tb-text-wrap rv_justify cell-element-5"><div class="rv_tb-text"><strong>构图与空间关系</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-6"><ul class="rv_tb-list"><li><strong>Depth</strong> (深度图）：检测图像中物体到相机的距离信息。近的物体显示为白色/亮色，远的物体显示为黑色/暗色，形成一张灰度图。<strong>适用场景：</strong>保持原图的透视关系、场景景深、物体间的远近布局。（PS：我感觉用来控制人物姿态很合适）</li><li><strong>Normal Map</strong> (法线图）：检测图像中物体表面的朝向信息。它用 RGB 颜色编码了物体表面在三维空间中法线（垂直于表面的方向）的 X、Y、Z 分量。<strong>适用场景：</strong> 精确引导图像的光照效果、阴影分布、物体表面的立体感和材质细节。</li><li><strong>MLSD </strong>(直线检测）：专门检测图像中所有直线结构，例如建筑、房间、家具的边缘。<strong>适用场景：</strong> 保持建筑效果图、室内设计图、几何结构图像的规整性和精确性。</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-7"><div class="rv_tb-text"><strong>内容与语义</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-8"><ul class="rv_tb-list"><li><strong>Segmentation </strong>(语义分割）：将图像中的不同物体或区域进行语义上的划分（例如识别出“人”、“车”、“树”、“天空”、“道路”等），并用不同的颜色或标签进行标注。<strong>适用场景：</strong> 根据区域填充内容，例如将“蓝色区域”替换为天空，“红色区域”替换为建筑。</li><li><strong>Shuffle (像素混洗）</strong>：将参考图像的像素特征打乱重排，然后作为条件注入。它不提取具体的结构，而是将原图的整体视觉特征（如色彩、质感、部分抽象内容）以混洗的方式传递。<strong>适用场景：</strong> 粗略地借鉴参考图的整体构图、色彩和风格，同时给予 AI 很大的自由度进行内容重构。</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-4-cell-0 rv_tb-rs-row-4-cell-0 rv_tb-cs-row-4-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">CN 模型<br>(ControNet Model)</div></div></div></td><td class="rv_tb-cell rv_tb-row-4-cell-1 rv_tb-rs-row-4-cell-1 rv_tb-cs-row-4-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">接收条件图像，并将其转换成赛博雕塑世界能理解的参考图，并以该参考图约束引导雕塑家的雕刻过程。</div></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-1"><div class="rv_tb-text">通常来说，不同的条件图像，需要用不同的CN模型来处理。</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap 
rv_justify cell-element-2"><ul class="rv_tb-list"><li>如canny类型的控制图则对应canny的CN模型</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-3"><div class="rv_tb-text">一般地，CN模型需要对应相同版本的Unet模型</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-4"><ul class="rv_tb-list"><li>sd15则对应sd15</li><li>SDXL则对应sdxl</li></ul></div></div></td></tr></tbody></table></figure>
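<p>以边缘类预处理器为例，“从原始像素图提取结构化条件图”的思路可以用一段极简示意代码理解（真实的Canny/Lineart等预处理器复杂得多，以下代码为本文虚构）：</p>

```python
import numpy as np

# 示意：一个极简的"边缘类预处理器"。真实的 Canny/Lineart 等预处理器复杂得多，
# 这里仅用梯度阈值演示"从原始像素图提取结构化条件图"的思路，代码为本文虚构。

def simple_edge_preprocessor(gray, threshold=0.5):
    """输入灰度图(H, W)，输出 0/1 边缘条件图，可视为 control image 的雏形。"""
    gx = np.abs(np.diff(gray, axis=1, prepend=gray[:, :1]))  # 水平方向明暗变化
    gy = np.abs(np.diff(gray, axis=0, prepend=gray[:1, :]))  # 垂直方向明暗变化
    return ((gx + gy) > threshold).astype(np.uint8)

img = np.zeros((6, 6))
img[:, 3:] = 1.0                       # 左黑右白，中间有一条竖直明暗交界
edges = simple_edge_preprocessor(img)
print(edges[:, 3])                     # 这一列全为 1：检测到边缘
```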



<h3 class="wp-block-heading">1.2 CN基本工作流(depth)图示</h3>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1383" height="841" src="https://0to60.top/wp-content/uploads/2025/07/image-1.png" alt="" class="wp-image-906" srcset="https://0to60.top/wp-content/uploads/2025/07/image-1.png 1383w, https://0to60.top/wp-content/uploads/2025/07/image-1-300x182.png 300w, https://0to60.top/wp-content/uploads/2025/07/image-1-1024x623.png 1024w, https://0to60.top/wp-content/uploads/2025/07/image-1-768x467.png 768w" sizes="auto, (max-width: 1383px) 100vw, 1383px" /></figure>



<h3 class="wp-block-heading">1.3 基本节点介绍&amp;参数理解</h3>



<figure class="wp-block-riovizual-tablebuilder is-style-stripes rv_tb-d11cfbe5-c1bd-443c-babe-aa8da3871fee is-scroll-on-mobile" rv-tb-responsive-breakpoint="768px"><table class=""><tbody><tr><th class="rv_tb-cell rv_tb-row-0-cell-0 rv_tb-rs-row-0-cell-0 rv_tb-cs-row-0-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">节点名称</div></div></div></th><th class="rv_tb-cell rv_tb-row-0-cell-1 rv_tb-rs-row-0-cell-1 rv_tb-cs-row-0-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">描述</div></div></div></th></tr><tr><td class="rv_tb-cell rv_tb-row-1-cell-0 rv_tb-rs-row-1-cell-0 rv_tb-cs-row-1-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Apply Controlnet</div></div></div></td><td class="rv_tb-cell rv_tb-row-1-cell-1 rv_tb-rs-row-1-cell-1 rv_tb-cs-row-1-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">CN的核心节点，整体作用是对prompt embedding(pos+neg)进行增强，可以理解为将控制信息添加到prompt embedding(pos+neg)中，就好比在设计需求说明文档中添加各种维度的约束信息</div></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-1"><div class="rv_tb-text"><strong>核心输入</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-2"><ul class="rv_tb-list"><li>positive：即positive prompt embedding(condition+,condition positive)</li><li>negative：即negative prompt embedding(condition-,condition negative)，两者好比就是把设计需求说明书传入，进行增强补充。</li><li>image：即controlnet image，如果要使用参考图(常规人眼看的图)中的控制信息(如结构，风格，光影，姿势)，通常需要一个转换工具preprocessor进行提前转换。如图中depth控制图。</li><li>Control_net：即Controlnet model，其作用就是将控制图中的信息解析并转换到prompt embedding中，好比就是将从controlnet img提取出来的控制信息补充添加到设计需求说明书中。</li><li><em>vae：说是能提供对控制信息的编码和解码，从而提高控制信息的质量和准确性，暂无明显体感。</em></li><li><em>model：有些节点会要求传入unet model，仅用于判断CN model 和 model是否匹配。一般来说unet model 与 CN model 
要求匹配。SD15匹配SD15,SDXL 匹配SDXL。</em></li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-3"><div class="rv_tb-text"><strong>额外控制参数</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-4"><ul class="rv_tb-list"><li>strength：对控制信息的遵循程度，默认为1。</li><li>start_percent：决定该控制信息从哪个时间点开始生效。</li><li>end_percent：决定该控制信息在哪个时间点结束。</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-5"><div class="rv_tb-text"><strong>核心输出</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-6"><ul class="rv_tb-list"><li>positive&amp;negative：即 prompt embedding(condition+/condition-)，也可以看做是被增强补充了控制信息的设计需求说明书。</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-2-cell-0 rv_tb-rs-row-2-cell-0 rv_tb-cs-row-2-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">AIO Aux Preprocessor</div></div></div></td><td class="rv_tb-cell rv_tb-row-2-cell-1 rv_tb-rs-row-2-cell-1 rv_tb-cs-row-2-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">综合的control image处理器</div></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-1"><div class="rv_tb-text"><strong>核心输入</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-2"><ul class="rv_tb-list"><li>image：即正常的像素图片，需要将其转换成control img。</li><li>Preprocessor model：选择preprocessor的类型，不同类型的model会生成不同的control img。</li><li>resolution：分辨率，默认512。个人喜欢直接设置为latent width。</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-3"><div class="rv_tb-text"><strong>核心输出</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-4"><ul class="rv_tb-list"><li>image：即control image，作为Apply 
Controlnet节点的核心输入之一。</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-3-cell-0 rv_tb-rs-row-3-cell-0 rv_tb-cs-row-3-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Load Controlnet Model</div></div></div></td><td class="rv_tb-cell rv_tb-row-3-cell-1 rv_tb-rs-row-3-cell-1 rv_tb-cs-row-3-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">CN MODEL加载器，有两个要求</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li>不同类型的控制图需要对应的CN model</li><li>不同类型的unet model 需要对应的 CN model，如SD15的unet model则需要SD15的CN model，SDXL等类推</li></ul></div></div></td></tr></tbody></table></figure>
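<p>表中strength、start_percent、end_percent三个参数的行为（控制信息在去噪过程的哪个区间、以多大力度生效），可以用一小段本文虚构的示意代码表达：</p>

```python
# 示意 Apply Controlnet 的三个额外控制参数：strength / start_percent / end_percent，
# 即"控制信息在去噪过程的哪个区间、以多大力度生效"。函数为本文虚构的等价写法。

def controlnet_strength_at(progress, strength=1.0,
                           start_percent=0.0, end_percent=1.0):
    """progress 为去噪进度(0~1)；区间内返回 strength，区间外返回 0。"""
    if start_percent <= progress <= end_percent:
        return strength
    return 0.0

steps = 10
active = [controlnet_strength_at(i / (steps - 1), strength=0.8,
                                 start_percent=0.2, end_percent=0.6)
          for i in range(steps)]
print(active)  # 只有中段(约 20%~60% 进度)的步骤受控制
```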



<h2 class="wp-block-heading">二、InstantID</h2>



<h3 class="wp-block-heading"><strong>2.1 初步了解InstantID</strong></h3>



<p>在视觉信息中，各种维度(结构、线条、笔触、颜色、光影等)都可以通过Controlnet进行控制，只要达到大致相当，我们便能接受；但当需要高度保留原图中的信息时(如风景还原度、人物还原度等)，通常达不到我们的期望。</p>



<p>而InstantID则通过还原脸部信息，从而提高人物还原度。</p>



<p>重要：截至本文更新时间，InstantID仅适用于SDXL版本。</p>



<h3 class="wp-block-heading">2.2 工作流图示</h3>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1724" height="1019" src="https://0to60.top/wp-content/uploads/2025/07/image-4.png" alt="" class="wp-image-921" srcset="https://0to60.top/wp-content/uploads/2025/07/image-4.png 1724w, https://0to60.top/wp-content/uploads/2025/07/image-4-300x177.png 300w, https://0to60.top/wp-content/uploads/2025/07/image-4-1024x605.png 1024w, https://0to60.top/wp-content/uploads/2025/07/image-4-768x454.png 768w, https://0to60.top/wp-content/uploads/2025/07/image-4-1536x908.png 1536w" sizes="auto, (max-width: 1724px) 100vw, 1724px" /></figure>



<h3 class="wp-block-heading">2.3基本节点介绍&amp;参数理解</h3>



<figure class="wp-block-riovizual-tablebuilder is-style-stripes rv_tb-e7bbd035-3679-4a8a-84ba-8bcb410293d1 is-scroll-on-mobile" rv-tb-responsive-breakpoint="768px"><table class=""><thead><tr><th class="rv_tb-cell rv_tb-row-0-cell-0 rv_tb-rs-row-0-cell-0 rv_tb-cs-row-0-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">节点名称</div></div></div></th><th class="rv_tb-cell rv_tb-row-0-cell-1 rv_tb-rs-row-0-cell-1 rv_tb-cs-row-0-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">描述</div></div></div></th></tr></thead><tbody><tr><td class="rv_tb-cell rv_tb-row-1-cell-0 rv_tb-rs-row-1-cell-0 rv_tb-cs-row-1-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">InstantID Patch Attention</div></div></div></td><td class="rv_tb-cell rv_tb-row-1-cell-1 rv_tb-rs-row-1-cell-1 rv_tb-cs-row-1-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">用于提取脸部信息，并与UNET Model进行适配，好比提取了脸部特征信息后，让雕塑家进行学习，从而获得画这张脸的能力。<br><strong>核心输入</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li>image：脸部参考图，建议只用脸部区域的图。</li><li>insightface：识别人脸信息，提取face embeds</li><li>model：即unet model</li><li>instantid：即传入instantid model，该model的作用是将提取后的人脸信息注入到unet model中，好比让雕塑家学习这个人脸过程，具备画该脸的能力。注意：只是多了个能力，而不是只有画该脸的能力</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-2"><div class="rv_tb-text"><strong>额外控制参数</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-3"><ul class="rv_tb-list"><li>ip_weight：可以理解为unet model的学习力度。</li><li>start/end_at：开始调用该能力的时间段。</li><li>noise：官方指明不加noise可能会有毁图，可能类图生图重绘，加上点泥巴后，才能更好的应用该能力，更丝滑一点。建议0.35以下，作者一般用0.2。</li></ul></div></div><div class="rv_tb-element"><div 
class="rv_tb-text-wrap rv_justify cell-element-4"><div class="rv_tb-text"><strong>输出</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-5"><ul class="rv_tb-list"><li>model：适配过脸部信息的unet model，好比已经学习过该脸部信息的雕塑家。</li><li>face embeds：脸部特征信息</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-2-cell-0 rv_tb-rs-row-2-cell-0 rv_tb-cs-row-2-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Load InstantID Model</div></div></div></td><td class="rv_tb-cell rv_tb-row-2-cell-1 rv_tb-rs-row-2-cell-1 rv_tb-cs-row-2-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">加载InstantID model，官方指定了一个模型，名为ip_adapter.bin，至于为什么名为ip_adapter，说是基于IPAdapter。<br>作为InstantID Patch Attention节点的核心输入之一。</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-3-cell-0 rv_tb-rs-row-3-cell-0 rv_tb-cs-row-3-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">InstantID Face Analysis</div></div></div></td><td class="rv_tb-cell rv_tb-row-3-cell-1 rv_tb-rs-row-3-cell-1 rv_tb-cs-row-3-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">提取脸部特征的模型，专用于识别并提取脸部信息，需按照github中的指引进行下载并放置在指定路径。<br>作为InstantID Patch Attention节点的核心输入之一。</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-4-cell-0 rv_tb-rs-row-4-cell-0 rv_tb-cs-row-4-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">InstantID Apply ControlNet</div></div></div></td><td class="rv_tb-cell rv_tb-row-4-cell-1 rv_tb-rs-row-4-cell-1 rv_tb-cs-row-4-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">将face embeds(脸部信息)和脸部姿态(通过image_kps预处理后获得)增强补充到prompt embedding(设计需求说明书)中，告诉unet 
model我们要雕刻谁。实验过程中，我有这么一种理解：InstantID是让雕塑家学习这张脸，但是雕塑家只是多了这么一张脸的信息，而真正雕刻时，雕刻什么样的脸是由设计需求说明书来确定。<br><strong>核心输入</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li>face embeds：脸部信息</li><li>prompt embedding（pos+neg）：即设计需求说明书。</li><li>image_kps：脸部姿态关键点（眼鼻嘴）参考图像。</li><li>Controlnet：cn model，它能将脸部特征和脸部姿态等信息增强补充到设计需求说明书中。</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-2"><div class="rv_tb-text"><strong>输出</strong></div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-3"><ul class="rv_tb-list"><li>prompt embedding（pos+neg）：被补充增强的设计需求说明书，告诉unet model要画哪张脸，脸部姿态是怎么样的。</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-5-cell-0 rv_tb-rs-row-5-cell-0 rv_tb-cs-row-5-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Load ControlNet Model</div></div></div></td><td class="rv_tb-cell rv_tb-row-5-cell-1 rv_tb-rs-row-5-cell-1 rv_tb-cs-row-5-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">加载Controlnet模型。</div></div></div></td></tr></tbody></table></figure>
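<p>InstantID的三段拆解（提取人脸信息、注入unet model、补充设计需求说明书），可以用一段纯示意的Python代码表达（函数名与数值均为本文虚构，并非插件真实API）：</p>

```python
import numpy as np

# 示意 InstantID 的三段拆解：提取人脸信息、注入 Unet(类似 IPAdapter)、
# 补充进 prompt embedding(类似 ControlNet)。函数与数值均为本文虚构。

def extract_face_embeds(face_image, dim=6):
    """示意 insightface：从脸部参考图提取 face embeds。"""
    rng = np.random.default_rng(abs(hash(face_image)) % (2**32))
    return rng.standard_normal(dim)

def patch_unet(unet_weight, face_embeds, ip_weight=0.8):
    """示意：向模型注入人脸特征，好比让雕塑家"学会"这张脸。"""
    return unet_weight + ip_weight * float(face_embeds.mean())

def apply_face_controlnet(prompt_embed, face_embeds, kps):
    """示意：把人脸信息与姿态关键点补进设计需求说明书。"""
    return np.concatenate([prompt_embed, face_embeds, np.ravel(kps)])

face = extract_face_embeds("face.png")               # 一、提取人脸信息
model = patch_unet(1.0, face, ip_weight=0.8)         # 二、注入 unet model
cond = apply_face_controlnet(np.ones(4), face, [[0.5, 0.5]])  # 三、补充说明书
print(cond.shape)  # (12,)
```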
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e5%85%a5%e9%97%a8%e7%af%87%ef%bc%8ccontrolnetinstantid/">ComfyUI，SD控图入门篇，Controlnet&amp;InstantID</a>最先出现在<a rel="nofollow" href="https://0to60.top">0260</a>。</p>
]]></content:encoded>
					
					<wfw:commentRss>https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e5%85%a5%e9%97%a8%e7%af%87%ef%bc%8ccontrolnetinstantid/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ComfyUI，SD控图入门篇，IPAdapter</title>
		<link>https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e7%af%87-%e5%9b%be%e5%83%8f%e6%9d%a1%e4%bb%b6%e5%8f%af%e6%8e%a7-ipadapter/</link>
					<comments>https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e7%af%87-%e5%9b%be%e5%83%8f%e6%9d%a1%e4%bb%b6%e5%8f%af%e6%8e%a7-ipadapter/#respond</comments>
		
		<dc:creator><![CDATA[burson]]></dc:creator>
		<pubDate>Tue, 24 Jun 2025 07:14:54 +0000</pubDate>
				<category><![CDATA[ComfyUI-SD]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<guid isPermaLink="false">https://0to60.top/?p=877</guid>

					<description><![CDATA[<p>引言 Stable Difussio&#46;&#46;&#46;</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e7%af%87-%e5%9b%be%e5%83%8f%e6%9d%a1%e4%bb%b6%e5%8f%af%e6%8e%a7-ipadapter/">ComfyUI，SD控图入门篇，IPAdapter</a>最先出现在<a rel="nofollow" href="https://0to60.top">0260</a>。</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">引言</h2>



<p>Stable Diffusion的作用主要是生图，而图片之所以能产生价值，一定是符合应用落地的某些需求，这就要求咱们从&#8220;随机生成&#8221;逐步迈向&#8220;精准控制&#8221;。</p>



<p>目前在comfyUI中，大致有这么几个主要方式可以进行控图。</p>



<ul class="wp-block-list">
<li>模型本身，提示词，LoRA，TI</li>



<li>IPAdapter</li>



<li>ControlNet &amp; InstantID</li>
</ul>



<p>本文主要就图像条件可控-IPAdapter进行总结。</p>



<h2 class="wp-block-heading">一、初步认识IPAdapter</h2>



<figure class="wp-block-riovizual-tablebuilder is-style-stripes rv_tb-2599dc76-e1e6-4843-8154-9805fa8ae2ae is-scroll-on-mobile" rv-tb-responsive-breakpoint="768px"><table class=""><thead><tr><th class="rv_tb-cell rv_tb-row-0-cell-0 rv_tb-rs-row-0-cell-0 rv_tb-cs-row-0-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">维度</div></div></div></th><th class="rv_tb-cell rv_tb-row-0-cell-1 rv_tb-rs-row-0-cell-1 rv_tb-cs-row-0-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">描述</div></div></div></th></tr></thead><tbody><tr><td class="rv_tb-cell rv_tb-row-1-cell-0 rv_tb-rs-row-1-cell-0 rv_tb-cs-row-1-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">定位</div></div></div></td><td class="rv_tb-cell rv_tb-row-1-cell-1 rv_tb-rs-row-1-cell-1 rv_tb-cs-row-1-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">图像风格/内容转化师。它通过图像来影响 AI 绘画的风格、身份或视觉元素。<br>理解全名，辅助理解其作用Image Prompt Adapter。</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-2-cell-0 rv_tb-rs-row-2-cell-0 rv_tb-cs-row-2-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">输入/输出</div></div></div></td><td class="rv_tb-cell rv_tb-row-2-cell-1 rv_tb-rs-row-2-cell-1 rv_tb-cs-row-2-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">输入</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li><strong>图像提示词 (Image)</strong>：你想提取风格或特征的参考图片</li><li><strong>CLIP图像编码器</strong>（CLIP Vision Encoder）：（内置或需要选择编码器类型）将人类肉眼能看见的图片信息转化成赛博雕塑世界能理解的信息，可能包含：物体概念、行为概念、色彩搭配、笔触、光影特点、艺术流派、边缘、纹理、基本形状甚至人物身份信息等。</li><li><strong>模型 (MODEL)</strong>：基础的 Stable Diffusion 模型 
(UNet)，好比一个雕塑家。</li><li><strong>IPAdapter 模型 (IPAdapter Model)</strong>：IP-Adapter 自身的权重文件，负责转换和注入图像特征。好比定向训练官，拿着转化好的图片信息，定向筛选(风格 或 脸部信息)让雕塑家进行学习。</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-2"><div class="rv_tb-text">输出</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-3"><ul class="rv_tb-list"><li>一个<strong>被适配过的 MODEL</strong>，即被图像引导能力“增强”了的 UNet 模型。好比接收过定向参考训练的雕塑家。</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-3-cell-0 rv_tb-rs-row-3-cell-0 rv_tb-cs-row-3-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">作用点</div></div></div></td><td class="rv_tb-cell rv_tb-row-3-cell-1 rv_tb-rs-row-3-cell-1 rv_tb-cs-row-3-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">主要作用于 <strong>UNet 模型</strong>的注意力层，让 UNet 在去噪过程中能“看到”并融合图像的视觉特征。</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-4-cell-0 rv_tb-rs-row-4-cell-0 rv_tb-cs-row-4-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">与prompt的关系</div></div></div></td><td class="rv_tb-cell rv_tb-row-4-cell-1 rv_tb-rs-row-4-cell-1 rv_tb-cs-row-4-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">与文本 Prompt <strong>并行工作</strong>，Prompt 定义“什么”，IP-Adapter 定义“像什么”</div></div></div></td></tr></tbody></table></figure>



<h2 class="wp-block-heading">二、IPAdapter基本工作流</h2>



<h3 class="wp-block-heading">2.1 工作流图示</h3>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1015" height="990" src="https://0to60.top/wp-content/uploads/2025/06/image-21.png" alt="" class="wp-image-885" srcset="https://0to60.top/wp-content/uploads/2025/06/image-21.png 1015w, https://0to60.top/wp-content/uploads/2025/06/image-21-300x293.png 300w, https://0to60.top/wp-content/uploads/2025/06/image-21-768x749.png 768w" sizes="auto, (max-width: 1015px) 100vw, 1015px" /></figure>



<h3 class="wp-block-heading">2.2 基本节点介绍及参数理解</h3>



<figure class="wp-block-riovizual-tablebuilder is-style-stripes rv_tb-4742b839-54e3-433d-ac2c-9f87ff87efd2 is-scroll-on-mobile" rv-tb-responsive-breakpoint="768px"><table class=""><thead><tr><th class="rv_tb-cell rv_tb-row-0-cell-0 rv_tb-rs-row-0-cell-0 rv_tb-cs-row-0-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Node name</div></div></div></th><th class="rv_tb-cell rv_tb-row-0-cell-1 rv_tb-rs-row-0-cell-1 rv_tb-cs-row-0-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Description</div></div></div></th></tr></thead><tbody><tr><td class="rv_tb-cell rv_tb-row-1-cell-0 rv_tb-rs-row-1-cell-0 rv_tb-cs-row-1-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">IPAdapter Advanced</div></div></div></td><td class="rv_tb-cell rv_tb-row-1-cell-1 rv_tb-rs-row-1-cell-1 rv_tb-cs-row-1-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Think of it as a training ground.<br>Core inputs</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li>img (the reference image)</li><li>CLIP Vision Encoder (image encoder): converts the image into an embedding, i.e. translates a picture from the human world into visual information that the cyber-sculpture world can understand.</li><li>model: the UNet model, i.e. the sculptor.</li><li>IPAdapter: the IPAdapter model, i.e. a specialized training officer that filters the visual information down to the parts worth learning and has the sculptor study them.</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-2"><div class="rv_tb-text">Additional parameter inputs</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-3"><ul class="rv_tb-list"><li>weight (influence strength): how strongly the reference style affects the result overall; think of it as how closely the sculptor follows the reference. 0.5&#8211;0.8 is a balanced range.</li><li>weight_type (weighting strategy): how weight varies between start_at and end_at. For example, linear ramps from 0 up to the specified weight, while ease in starts weak and strengthens later, and so on.</li><li>noise (embedding noise): after the visual information is extracted from the image, a small random perturbation is added on top, so that details vary while the overall style is preserved; usually kept below 0.2.</li><li>start_at / end_at (active window): when during denoising the style reference starts and stops applying. Early denoising steps mostly establish composition while later steps add detail, and the reference image carries composition information of its own, so delaying start_at keeps the composition phase from copying the reference's layout.</li><li>embeds_scaling: adaptive scaling of the image guidance; think of it as the rule for filtering and fusing the image embedding. V is the usual choice; the others call for experimentation.</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-2-cell-0 rv_tb-rs-row-2-cell-0 rv_tb-cs-row-2-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Load CLIP Vision</div></div></div></td><td class="rv_tb-cell rv_tb-row-2-cell-1 rv_tb-rs-row-2-cell-1 rv_tb-cs-row-2-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Loads a CLIP Vision Encoder (one must be selected), which serves as one of the core inputs to IPAdapter Advanced.</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li>ViT-L/14 (Vision Transformer, Large, patch size 14); example files: clip-vit-large-patch14.safetensors, clip-vit-l-14.ckpt</li><li>ViT-H/14 (Vision Transformer, Huge, patch size 14); example files: clip-vit-huge-patch14.safetensors, ViT-H.safetensors</li><li>ViT-G/14 (Vision Transformer, Giant, patch size 14); example files: clip_g.safetensors, ViT-g-14-laion2B-s12B-b42K.safetensors</li></ul></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-2"><div class="rv_tb-text">In principle G > H > L: the larger the encoder, the more visual information it can recognize.</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-3-cell-0 rv_tb-rs-row-3-cell-0 rv_tb-cs-row-3-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Load Image</div></div></div></td><td class="rv_tb-cell rv_tb-row-3-cell-1 rv_tb-rs-row-3-cell-1 rv_tb-cs-row-3-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">The reference style image. The CLIP Vision Encoder extracts its visual information, which serves as one of the core inputs to IPAdapter Advanced.</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-4-cell-0 rv_tb-rs-row-4-cell-0 rv_tb-cs-row-4-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">IPAdapter Model Loader</div></div></div></td><td class="rv_tb-cell rv_tb-row-4-cell-1 rv_tb-rs-row-4-cell-1 rv_tb-cs-row-4-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">The specialized training officer: filters the visual information and has the sculptor (the model) learn from it; one of the core inputs to IPAdapter Advanced.</div></div></div><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-1"><div class="rv_tb-text">Notably, since it filters the already-encoded visual information, you would expect the IPAdapter model and the CLIP Vision Encoder to have a matching relationship, and indeed they do.<br>Models named like *-h usually pair with ViT-H.<br>Models named like *-g usually pair with ViT-G.<br>And so on. Absent other information, if you hit an error, try the ViT-H vision encoder first.</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-5-cell-0 rv_tb-rs-row-5-cell-0 rv_tb-cs-row-5-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Rescale_CFG</div></div></div></td><td class="rv_tb-cell rv_tb-row-5-cell-1 rv_tb-rs-row-5-cell-1 rv_tb-cs-row-5-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">The core parameter that <strong>controls the final strength of all conditioning guidance</strong>. By recalibrating the strength of the combined guidance direction produced by the prompt and the adapter together, it <strong>smooths out the over-guidance that a high CFG scale and a strong adapter can cause</strong>, reducing artifacts and making images more natural and pleasing.<br>In a sense, it also achieves the following:</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li>Lower values leave the model more room to improvise, so the IPAdapter's guidance is relatively weaker.</li><li>Higher values leave the model less room to improvise, so the IPAdapter's guidance is relatively stronger.</li></ul></div></div></td></tr></tbody></table></figure>
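<p>To make the parameters above concrete, here is a minimal Python sketch of the IPAdapter portion of a ComfyUI API-format graph. The node class names (IPAdapterAdvanced, IPAdapterModelLoader, CLIPVisionLoader) follow the ComfyUI_IPAdapter_plus extension, and the file names and node IDs are placeholders; adjust them to whatever your installation actually provides.</p>

```python
import json

def ipadapter_graph(image="ref.png",
                    ipadapter_file="ip-adapter-plus_sdxl_vit-h.safetensors",
                    clip_vision="CLIP-ViT-H-14.safetensors",
                    weight=0.7, start_at=0.2, end_at=1.0):
    """IPAdapter portion of a ComfyUI API-format graph (node IDs arbitrary).

    Node "4" is assumed to be a checkpoint loader defined elsewhere in the graph.
    """
    return {
        "10": {"class_type": "LoadImage", "inputs": {"image": image}},
        "11": {"class_type": "CLIPVisionLoader", "inputs": {"clip_name": clip_vision}},
        "12": {"class_type": "IPAdapterModelLoader", "inputs": {"ipadapter_file": ipadapter_file}},
        "13": {"class_type": "IPAdapterAdvanced", "inputs": {
            "model": ["4", 0],        # the UNet ("sculptor") from the checkpoint loader
            "ipadapter": ["12", 0],   # the "training officer"
            "image": ["10", 0],
            "clip_vision": ["11", 0],
            "weight": weight,         # 0.5-0.8 is the balanced range noted above
            "weight_type": "linear",
            "start_at": start_at,     # delayed so composition settles before the style applies
            "end_at": end_at,
            "embeds_scaling": "V only",
        }},
    }

# The graph is plain JSON, ready to merge into a full workflow and send to ComfyUI.
print(json.dumps(ipadapter_graph()["13"]["inputs"]))
```

<p>The bracketed pairs like <code>["12", 0]</code> are ComfyUI's convention for "output 0 of node 12", which is how the node wiring in the screenshot is expressed in JSON.</p>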



<h2 class="wp-block-heading">3. IPAdapter-FaceID</h2>



<p>Besides referencing a style, another common scenario in image generation is this: how do you precisely preserve a character's facial identity and features in the generated work?</p>



<h3 class="wp-block-heading">3.1 Basic Workflow</h3>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1683" height="950" src="https://0to60.top/wp-content/uploads/2025/06/image-23.png" alt="" class="wp-image-890" srcset="https://0to60.top/wp-content/uploads/2025/06/image-23.png 1683w, https://0to60.top/wp-content/uploads/2025/06/image-23-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/06/image-23-1024x578.png 1024w, https://0to60.top/wp-content/uploads/2025/06/image-23-768x434.png 768w, https://0to60.top/wp-content/uploads/2025/06/image-23-1536x867.png 1536w, https://0to60.top/wp-content/uploads/2025/06/image-23-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/06/image-23-700x394.png 700w" sizes="auto, (max-width: 1683px) 100vw, 1683px" /></figure>



<h3 class="wp-block-heading">3.2 Basic Nodes and Parameters</h3>



<figure class="wp-block-riovizual-tablebuilder is-style-stripes rv_tb-d41364a6-d8fe-4d92-a3ec-e5fab96ac11c is-scroll-on-mobile" rv-tb-responsive-breakpoint="768px"><table class=""><thead><tr><th class="rv_tb-cell rv_tb-row-0-cell-0 rv_tb-rs-row-0-cell-0 rv_tb-cs-row-0-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Node</div></div></div></th><th class="rv_tb-cell rv_tb-row-0-cell-1 rv_tb-rs-row-0-cell-1 rv_tb-cs-row-0-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Description</div></div></div></th></tr></thead><tbody><tr><td class="rv_tb-cell rv_tb-row-1-cell-0 rv_tb-rs-row-1-cell-0 rv_tb-cs-row-1-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">IPAdapter FaceID</div></div></div></td><td class="rv_tb-cell rv_tb-row-1-cell-1 rv_tb-rs-row-1-cell-1 rv_tb-cs-row-1-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Similar to IPAdapter Advanced; its inputs are:</div></div></div><div class="rv_tb-element"><div class="rv_tb-list-wrap rv_justify cell-element-1"><ul class="rv_tb-list"><li>img, model: as above.</li><li>CLIP Vision Encoder: as above.</li><li>IPAdapter model: this family includes the FaceID variants, which can both transfer style and preserve facial features.</li><li>The LoRA paired with FaceID: per the official guidance, FaceID models are generally used together with a designated LoRA, named after the IPAdapter model with an _lora suffix.</li><li>InsightFace: preserving facial features relies on InsightFace's capabilities, so an InsightFace provider must also be specified.</li></ul></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-2-cell-0 rv_tb-rs-row-2-cell-0 rv_tb-cs-row-2-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">IPAdapter Model Loader</div></div></div></td><td class="rv_tb-cell rv_tb-row-2-cell-1 rv_tb-rs-row-2-cell-1 rv_tb-cs-row-2-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Same as in the basic workflow.</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-3-cell-0 rv_tb-rs-row-3-cell-0 rv_tb-cs-row-3-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Load Image</div></div></div></td><td class="rv_tb-cell rv_tb-row-3-cell-1 rv_tb-rs-row-3-cell-1 rv_tb-cs-row-3-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Same as in the basic workflow.</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-4-cell-0 rv_tb-rs-row-4-cell-0 rv_tb-cs-row-4-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Load CLIP Vision</div></div></div></td><td class="rv_tb-cell rv_tb-row-4-cell-1 rv_tb-rs-row-4-cell-1 rv_tb-cs-row-4-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">Same as in the basic workflow.</div></div></div></td></tr><tr><td class="rv_tb-cell rv_tb-row-5-cell-0 rv_tb-rs-row-5-cell-0 rv_tb-cs-row-5-cell-0"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">LoraLoaderModelOnly</div></div></div></td><td class="rv_tb-cell rv_tb-row-5-cell-1 rv_tb-rs-row-5-cell-1 rv_tb-cs-row-5-cell-1"><div class="rv_tb-element"><div class="rv_tb-text-wrap rv_justify cell-element-0"><div class="rv_tb-text">As noted in <a href="https://0to60.top/2025/06/04/comfyui%ef%bc%8c%e5%ae%9e%e6%88%98%e8%ae%b0%e5%bd%95/" data-type="post" data-id="702">ComfyUI: An SD Beginner's Guide</a>, a LoRA affects both the UNet model and CLIP, but FaceID only needs it to affect the UNet. Loading it through the usual Load LoRA node would add extra wiring to the workflow, so the LoraLoaderModelOnly node is recommended here.</div></div></div></td></tr></tbody></table></figure>
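<p>The encoder-matching rule of thumb described in these tables (names containing -h pair with ViT-H, -g with ViT-G, and ViT-H is the safe first try) can be sketched as a small helper. This is only a heuristic over file names, not an authoritative mapping:</p>

```python
def suggest_clip_vision(ipadapter_filename: str) -> str:
    """Guess which CLIP Vision encoder an IPAdapter checkpoint expects.

    Heuristic only, based on common naming conventions; ViT-H is the
    recommended first try when nothing in the name gives it away.
    """
    stem = ipadapter_filename.lower().removesuffix(".safetensors").removesuffix(".bin")
    if "vit-g" in stem or stem.endswith(("-g", "_g")):
        return "ViT-G/14"
    if "vit-h" in stem or stem.endswith(("-h", "_h")):
        return "ViT-H/14"
    if "vit-l" in stem or stem.endswith(("-l", "_l")):
        return "ViT-L/14"
    return "ViT-H/14"  # no hint in the name: try ViT-H first, per the note above

print(suggest_clip_vision("ip-adapter-plus_sdxl_vit-h.safetensors"))  # ViT-H/14
```

<p>If the suggested encoder still produces a shape-mismatch error, the checkpoint simply follows a different convention and the other encoders are worth trying one by one.</p>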
<p>The post <a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e7%af%87-%e5%9b%be%e5%83%8f%e6%9d%a1%e4%bb%b6%e5%8f%af%e6%8e%a7-ipadapter/">ComfyUI: SD Image-Control Basics, IPAdapter</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://0to60.top/comfyui%ef%bc%8c%e6%8e%a7%e5%9b%be%e7%af%87-%e5%9b%be%e5%83%8f%e6%9d%a1%e4%bb%b6%e5%8f%af%e6%8e%a7-ipadapter/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>Hello, ComfyUI</title>
		<link>https://0to60.top/hello%ef%bc%8ccomfyui/</link>
					<comments>https://0to60.top/hello%ef%bc%8ccomfyui/#respond</comments>
		
		<dc:creator><![CDATA[burson]]></dc:creator>
		<pubDate>Wed, 04 Jun 2025 06:19:27 +0000</pubDate>
				<category><![CDATA[ComfyUI-SD]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<guid isPermaLink="false">https://0to60.top/?p=694</guid>

					<description><![CDATA[<p>Introduction: this article mainly covers setting up and deploying Com&#46;&#46;&#46;</p>
<p>The post <a rel="nofollow" href="https://0to60.top/hello%ef%bc%8ccomfyui/">Hello, ComfyUI</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>



<p>This article covers setting up and deploying ComfyUI, then building first workflows (text-to-image and image-to-image).</p>



<h2 class="wp-block-heading">1. Quick ComfyUI Deployment</h2>



<h3 class="wp-block-heading">1.1 Choosing a GPU Server Provider</h3>



<p>Here I chose 优云智算 (Compshare): pricing is affordable and GPUs are plentiful. I previously used AutoDL and often ran into GPU shortages.</p>


https://passport.compshare.cn/register?referral_code=4tuMbi2nPCLBv4tAtMkjvr


<h3 class="wp-block-heading">1.2 Buying a Server Instance and Deploying ComfyUI</h3>



<ul class="wp-block-list">
<li>Confirm the GPU type. I personally lean toward the 3090 or 3080 Ti, for no reason other than good value and enough VRAM.</li>



<li>For the image, pick one that ships with ComfyUI, e.g. <strong>ComfyUI-Wanx-I2V</strong>; any of them will do.</li>
</ul>



<figure class="wp-block-image size-large is-resized"><img loading="lazy" decoding="async" width="1024" height="486" src="https://0to60.top/wp-content/uploads/2025/06/image-1024x486.png" alt="" class="wp-image-696" style="width:720px" srcset="https://0to60.top/wp-content/uploads/2025/06/image-1024x486.png 1024w, https://0to60.top/wp-content/uploads/2025/06/image-300x142.png 300w, https://0to60.top/wp-content/uploads/2025/06/image-768x364.png 768w, https://0to60.top/wp-content/uploads/2025/06/image-1536x729.png 1536w, https://0to60.top/wp-content/uploads/2025/06/image.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<h3 class="wp-block-heading">1.3 Launching ComfyUI and Opening the App</h3>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="486" src="https://0to60.top/wp-content/uploads/2025/06/image-1-1024x486.png" alt="" class="wp-image-697" srcset="https://0to60.top/wp-content/uploads/2025/06/image-1-1024x486.png 1024w, https://0to60.top/wp-content/uploads/2025/06/image-1-300x142.png 300w, https://0to60.top/wp-content/uploads/2025/06/image-1-768x364.png 768w, https://0to60.top/wp-content/uploads/2025/06/image-1-1536x729.png 1536w, https://0to60.top/wp-content/uploads/2025/06/image-1.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>DONE</p>



<h2 class="wp-block-heading">2. Hello, ComfyUI</h2>



<h3 class="wp-block-heading">2.1 Text-to-Image</h3>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="486" src="https://0to60.top/wp-content/uploads/2025/06/image-2-1024x486.png" alt="" class="wp-image-698" srcset="https://0to60.top/wp-content/uploads/2025/06/image-2-1024x486.png 1024w, https://0to60.top/wp-content/uploads/2025/06/image-2-300x142.png 300w, https://0to60.top/wp-content/uploads/2025/06/image-2-768x364.png 768w, https://0to60.top/wp-content/uploads/2025/06/image-2-1536x729.png 1536w, https://0to60.top/wp-content/uploads/2025/06/image-2.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>
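<p>The text-to-image workflow in the screenshot can also be driven without the browser, through ComfyUI's HTTP API (a POST to <code>/prompt</code> on the server, port 8188 by default). Below is a hedged Python sketch: the graph mirrors the nodes of the default workflow, the checkpoint name is a placeholder, and <code>submit()</code> of course requires a running instance.</p>

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # assumes a local ComfyUI instance

def build_txt2img(positive, negative="", seed=0,
                  ckpt="v1-5-pruned-emaonly.safetensors"):
    """Minimal text-to-image graph in ComfyUI's API format (node IDs arbitrary)."""
    return {
        "1": {"class_type": "CheckpointLoaderSimple", "inputs": {"ckpt_name": ckpt}},
        "2": {"class_type": "CLIPTextEncode", "inputs": {"text": positive, "clip": ["1", 1]}},
        "3": {"class_type": "CLIPTextEncode", "inputs": {"text": negative, "clip": ["1", 1]}},
        "4": {"class_type": "EmptyLatentImage",
              "inputs": {"width": 512, "height": 512, "batch_size": 1}},
        "5": {"class_type": "KSampler", "inputs": {
            "model": ["1", 0], "positive": ["2", 0], "negative": ["3", 0],
            "latent_image": ["4", 0], "seed": seed, "steps": 20, "cfg": 7.0,
            "sampler_name": "euler", "scheduler": "normal", "denoise": 1.0}},
        "6": {"class_type": "VAEDecode", "inputs": {"samples": ["5", 0], "vae": ["1", 2]}},
        "7": {"class_type": "SaveImage", "inputs": {"images": ["6", 0], "filename_prefix": "hello"}},
    }

def submit(graph):
    """POST the graph to the /prompt endpoint; needs a running server."""
    data = json.dumps({"prompt": graph}).encode()
    req = urllib.request.Request(f"{COMFY_URL}/prompt", data=data,
                                 headers={"Content-Type": "application/json"})
    return json.load(urllib.request.urlopen(req))

graph = build_txt2img("a cat in a spacesuit", negative="blurry")
# submit(graph)  # uncomment with ComfyUI running
```

<p>This is the same graph the UI builds visually; exporting a workflow with "Save (API Format)" is an easy way to get a known-good starting point.</p>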



<h3 class="wp-block-heading">2.2 Image-to-Image</h3>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="486" src="https://0to60.top/wp-content/uploads/2025/06/image-3-1024x486.png" alt="" class="wp-image-699" srcset="https://0to60.top/wp-content/uploads/2025/06/image-3-1024x486.png 1024w, https://0to60.top/wp-content/uploads/2025/06/image-3-300x142.png 300w, https://0to60.top/wp-content/uploads/2025/06/image-3-768x364.png 768w, https://0to60.top/wp-content/uploads/2025/06/image-3-1536x729.png 1536w, https://0to60.top/wp-content/uploads/2025/06/image-3.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>






<div class="wp-block-cover aligncenter"><img loading="lazy" decoding="async" width="612" height="216" class="wp-block-cover__image-background wp-image-137" alt="" src="https://0to60.top/wp-content/uploads/2025/05/istockphoto-1368531657-612x612-1.jpg" data-object-fit="cover" srcset="https://0to60.top/wp-content/uploads/2025/05/istockphoto-1368531657-612x612-1.jpg 612w, https://0to60.top/wp-content/uploads/2025/05/istockphoto-1368531657-612x612-1-300x106.jpg 300w" sizes="auto, (max-width: 612px) 100vw, 612px" /><span aria-hidden="true" class="wp-block-cover__background has-background-dim"></span><div class="wp-block-cover__inner-container is-layout-flow wp-block-cover-is-layout-flow">
<p class="has-text-align-center has-large-font-size"></p>
</div></div>



<p>The post <a rel="nofollow" href="https://0to60.top/hello%ef%bc%8ccomfyui/">Hello, ComfyUI</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://0to60.top/hello%ef%bc%8ccomfyui/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
		<item>
		<title>ComfyUI, Start</title>
		<link>https://0to60.top/comfyui%ef%bc%8c%e5%90%af%e5%8a%a8/</link>
					<comments>https://0to60.top/comfyui%ef%bc%8c%e5%90%af%e5%8a%a8/#respond</comments>
		
		<dc:creator><![CDATA[burson]]></dc:creator>
		<pubDate>Fri, 30 May 2025 07:32:23 +0000</pubDate>
				<category><![CDATA[ComfyUI-SD]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<guid isPermaLink="false">https://0to60.top/?p=651</guid>

					<description><![CDATA[<p>Introduction: this article explains why to study C&#46;&#46;&#46;</p>
<p>The post <a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e5%90%af%e5%8a%a8/">ComfyUI, Start</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>



<p>This article explains why ComfyUI is worth studying and lays out an MVP execution path.</p>



<h2 class="wp-block-heading">1. Why Study ComfyUI</h2>



<ul class="wp-block-list">
<li>Self-media creation needs content, and that content takes the form of images or video; ComfyUI can empower content creation.</li>



<li>I have studied SD WebUI before, but ComfyUI is more flexible and more logically structured, and its community is very active right now, so I chose it.</li>
</ul>



<h2 class="wp-block-heading">2. The MVP Execution Path</h2>



<ol class="wp-block-list">
<li>Quickly pick a GPU server platform, without worrying too much about price.</li>



<li>Buy a server and deploy ComfyUI.</li>



<li>Generate the first image.</li>



<li>Publish &#8220;Hello, ComfyUI&#8221;.</li>
</ol>
<p>The post <a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e5%90%af%e5%8a%a8/">ComfyUI, Start</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://0to60.top/comfyui%ef%bc%8c%e5%90%af%e5%8a%a8/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		
			</item>
	</channel>
</rss>
