<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	>

<channel>
	<title>ComfyUI Practice Series &#8211; 0260</title>
	<atom:link href="https://0to60.top/tag/comfyui%E5%AE%9E%E8%B7%B5%E7%AF%87/feed/" rel="self" type="application/rss+xml" />
	<link>https://0to60.top</link>
	<description>Zero To More</description>
	<lastBuildDate>Wed, 17 Sep 2025 07:55:06 +0000</lastBuildDate>
	<language>zh-Hans</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	<generator>https://wordpress.org/?v=6.9</generator>

<image>
	<url>https://0to60.top/wp-content/uploads/2025/05/cropped-ba4ded44ed2ce0995e556ab476fd11d2-1-150x150.jpg</url>
	<title>ComfyUI Practice Series &#8211; 0260</title>
	<link>https://0to60.top</link>
	<width>32</width>
	<height>32</height>
</image> 
	<item>
		<title>ComfyUI: Consistent Characters in Practice</title>
		<link>https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/</link>
					<comments>https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/#respond</comments>
		
		<dc:creator><![CDATA[burson]]></dc:creator>
		<pubDate>Mon, 15 Sep 2025 09:25:00 +0000</pubDate>
				<category><![CDATA[ComfyUI-SD]]></category>
		<category><![CDATA[ComfyUI]]></category>
		<category><![CDATA[Comfyui实践篇]]></category>
		<guid isPermaLink="false">https://0to60.top/?p=1015</guid>

					<description><![CDATA[<p>Introduction To avoid falling into a learning trap, use staged&#46;&#46;&#46;</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/">ComfyUI: Consistent Characters in Practice</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></description>
										<content:encoded><![CDATA[
<h2 class="wp-block-heading">Introduction</h2>



<p>To avoid falling into a learning trap, use staged goals as a guide to constrain the learning path.</p>



<p>The learning goal for this installment is a consistent character, culminating in training a Flux LoRA model. The overall approach:</p>



<ul class="wp-block-list">
<li>Character initialization: produce a satisfactory front-facing T-pose full-body shot and a high-resolution head image.</li>

<li>Multi-view generation: from the initialized character, produce images of the same character from different viewpoints.</li>

<li>Enrich expressions: add facial expressions to all the images.</li>

<li>Enrich backgrounds: add backgrounds to the expression-enhanced images. (At this point, the training material is ready.)</li>

<li>Material preparation: batch-resize and caption the images.</li>

<li>Model training.</li>
</ul>



<h2 class="wp-block-heading">1. Character Initialization</h2>



<p>This step is fairly simple and has two goals.</p>



<ul class="wp-block-list">
<li>Obtain a front-facing T-pose full-body shot.</li>

<li>Obtain a high-resolution upscale of the head, to serve as the reference for facial features later.</li>
</ul>



<h3 class="wp-block-heading">1.1 Generating the T-Pose Character Image</h3>



<p>The FLUX model is used here.</p>



<p>Generally, a character is either generated at random, or you want it to carry the facial features of a specific target person. With Flux, a PuLID module can be added to extract the target person's facial features.</p>



<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img fetchpriority="high" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image.png" alt="" class="wp-image-1018" srcset="https://0to60.top/wp-content/uploads/2025/09/image.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-700x394.png 700w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>



<h3 class="wp-block-heading">1.2 High-Resolution Head Upscaling</h3>



<p>Basic approach:</p>



<ul class="wp-block-list">
<li>Extract the head region from the T-pose character.</li>

<li>SD upscaling plus FaceDetailer; to keep the upscale from drifting too far from the original, use prompt interrogation from the image plus ControlNet_Tile to preserve the original's features.</li>

<li>Add a color-match node to preserve the color style of the original character image.</li>
</ul>



<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-1.png" alt="" class="wp-image-1020" srcset="https://0to60.top/wp-content/uploads/2025/09/image-1.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-1-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-1-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-1-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-1-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-1-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-1-700x394.png 700w" sizes="(max-width: 1920px) 100vw, 1920px" /></figure>



<h2 class="wp-block-heading">2. Multi-View Character Images</h2>



<p>The main goal of this step is to derive multi-view images from the front-facing T-pose image produced in step 1. The overall approach:</p>



<ul class="wp-block-list">
<li>Obtain multi-view reference images to supply ControlNet guidance (pose, depth, canny) for the generation that follows. (Skip this step if you already have multi-view CN control images.)</li>

<li>Generate multi-view images from the T-pose image, under the CN control supplied by the reference images.</li>

<li>Because reference-guided generation usually drifts noticeably in facial features, the faces still need a pass of processing; a face-swap approach works.</li>
</ul>



<h3 class="wp-block-heading">2.1 Obtaining Multi-View Reference Images</h3>



<p>You can search online for multi-view reference images with keywords such as: pose sheet, but it is usually hard to find one that is directly usable.</p>



<figure class="wp-block-image size-large"><img decoding="async" width="1024" height="486" src="https://0to60.top/wp-content/uploads/2025/09/image-3-1024x486.png" alt="" class="wp-image-1022" srcset="https://0to60.top/wp-content/uploads/2025/09/image-3-1024x486.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-3-300x142.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-3-768x364.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-3-1536x729.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-3.png 1920w" sizes="(max-width: 1024px) 100vw, 1024px" /></figure>



<p>Some creators/bloggers share generic CN control images directly, as in the image below, but they are not very stable from seed to seed.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="297" height="300" src="https://0to60.top/wp-content/uploads/2025/09/image-2.png" alt="" class="wp-image-1021"/></figure>



<p>After many attempts, the approach adopted here for more stable results is:</p>



<ul class="wp-block-list">
<li>Use MV-Adapter to first generate a <strong>six-view</strong> sheet from the T-pose image. Fine details will differ noticeably, but overall body shape and color remain quite consistent, so the CN control images extracted from it fit the target well.</li>
</ul>



<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-4.png" alt="" class="wp-image-1023" srcset="https://0to60.top/wp-content/uploads/2025/09/image-4.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-4-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-4-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-4-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-4-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-4-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-4-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h3 class="wp-block-heading">2.2 Generating Multi-View Character Images</h3>



<p><strong>Prerequisites:</strong></p>



<p>The T-pose image supplies overall body shape, clothing, hairstyle, and similar element information; the multi-view reference images supply the CN control information (pose maps are used this time).</p>



<p><strong>Overall approach:</strong></p>



<ul class="wp-block-list">
<li>The Fill+Redux combination can generate images similar to the T-pose image (overall body shape, content, color, etc.), but the pose cannot be controlled. (Note: ComfyUI's built-in Redux is so strong that it is nearly impossible to change pose information; Redux Adv is recommended instead.)</li>

<li>On top of Fill+Redux, add a ControlNet module to control the pose.</li>
</ul>



<p>The core of the workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-5.png" alt="" class="wp-image-1024" srcset="https://0to60.top/wp-content/uploads/2025/09/image-5.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-5-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-5-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-5-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-5-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-5-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-5-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>After running the Fill+Redux combination workflow, the desired multi-view images are obtained:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-6.png" alt="" class="wp-image-1025" srcset="https://0to60.top/wp-content/uploads/2025/09/image-6.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-6-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-6-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-6-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-6-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-6-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-6-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>
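<p>Once the multi-view workflow is dialed in, re-running it for each pose map can be automated by driving ComfyUI over its local HTTP API instead of clicking through the UI. This is a minimal sketch, assuming a default ComfyUI install listening on 127.0.0.1:8188 and a workflow already exported in API (JSON) format; the workflow contents below are placeholders, not this post's actual graph.</p>

```python
import json
import urllib.request

COMFY_URL = "http://127.0.0.1:8188"  # default local ComfyUI address (assumption)

def build_payload(workflow: dict, client_id: str = "batch-run") -> bytes:
    """Serialize an API-format workflow for ComfyUI's /prompt endpoint."""
    return json.dumps({"prompt": workflow, "client_id": client_id}).encode("utf-8")

def queue_workflow(workflow: dict) -> dict:
    """POST the workflow; the server replies with a prompt_id to track the job."""
    req = urllib.request.Request(
        COMFY_URL + "/prompt",
        data=build_payload(workflow),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

<p>In practice you would load the exported JSON, patch the node input that holds the pose-map filename, and call <code>queue_workflow</code> once per view.</p>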



<h3 class="wp-block-heading"><strong>2.3 </strong>Face Swapping</h3>



<p>The multi-view images from 2.2 are very similar in overall content but differ noticeably in facial features, so a face-swap workflow is applied on top of them; this is where the head image from step 1.2 pays off.</p>



<p>There are plenty of face-swap workflows, with mature approaches for SD1.5/XL as well as Flux; Flux is used here. The overall approach:</p>



<ul class="wp-block-list">
<li>Crop the head region out of the target image (the image whose face is to be swapped).</li>

<li>Resize the crop to roughly 1 megapixel (±30%); Flux generates best around this size.</li>

<li>Repaint the face region of the crop with Fill inpainting; PuLID+Redux+ACE (Portrait) are used together here to extract the facial features.</li>

<li>Paste the repainted crop back onto the target image.</li>
</ul>
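<p>The "roughly 1 megapixel (±30%)" resize in the second bullet is plain arithmetic; a minimal sketch (the rounding to multiples of 8 is my own assumption, a common convenience for latent-diffusion sizes, not something the post specifies):</p>

```python
import math

def resize_to_megapixel(width: int, height: int, target_px: int = 1_000_000):
    """Scale (width, height) so the pixel count lands near target_px,
    preserving aspect ratio and rounding each side to a multiple of 8."""
    scale = math.sqrt(target_px / (width * height))
    new_w = max(8, round(width * scale / 8) * 8)
    new_h = max(8, round(height * scale / 8) * 8)
    return new_w, new_h

# e.g. a 512x768 head crop scales up toward ~1 MP
print(resize_to_megapixel(512, 768))  # -> (816, 1224)
```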



<p>The core of the workflow is shown below:</p>



<p>Preparation before inpainting:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-7.png" alt="" class="wp-image-1026" srcset="https://0to60.top/wp-content/uploads/2025/09/image-7.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-7-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-7-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-7-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-7-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-7-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-7-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>Applying Fill+Redux+ACE (Portrait)+PuLID:</p>



<figure class="wp-block-image size-large"><img loading="lazy" decoding="async" width="1024" height="576" src="https://0to60.top/wp-content/uploads/2025/09/image-9-1024x576.png" alt="" class="wp-image-1028" srcset="https://0to60.top/wp-content/uploads/2025/09/image-9-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-9-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-9-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-9-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-9-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-9-700x394.png 700w, https://0to60.top/wp-content/uploads/2025/09/image-9.png 1920w" sizes="auto, (max-width: 1024px) 100vw, 1024px" /></figure>



<p>Restoring the repainted face region into the target image:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-10.png" alt="" class="wp-image-1029" srcset="https://0to60.top/wp-content/uploads/2025/09/image-10.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-10-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-10-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-10-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-10-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-10-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-10-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h2 class="wp-block-heading">3. Enriching Character Expressions</h2>



<p>The goal of this step is to add expressions to the generated multi-view images. The overall approach:</p>



<ul class="wp-block-list">
<li>Crop the images to keep only the shoulders-up region, so most of the training material concentrates on the head.</li>

<li>Expression transfer.</li>
</ul>



<h3 class="wp-block-heading">3.1 Image Cropping</h3>



<p>Omitted here; any quickly assembled workflow can batch-process this step.</p>
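<p>For reference, the shoulders-up crop reduces to computing one box per image; a pure-Python sketch (the 0.45 keep-ratio is an assumed value to tune per dataset, and the returned box is meant to feed a crop call such as PIL's <code>Image.crop</code>):</p>

```python
def shoulders_up_box(width: int, height: int, keep_ratio: float = 0.45):
    """Square crop box (left, top, right, bottom) over the top of the image:
    keep the top keep_ratio of the height, center-cropped horizontally.
    keep_ratio is an assumed per-dataset tuning value, not from the post."""
    side = min(width, int(height * keep_ratio))
    left = (width - side) // 2
    return (left, 0, left + side, side)

# For a 768x768 frame this keeps a centered 345x345 square from the top edge.
print(shoulders_up_box(768, 768))
```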



<h3 class="wp-block-heading">3.2 Expression Transfer</h3>



<p>One piece of preparation for expression transfer is finding expression reference images. There are many ways to do this; a few:</p>



<ul class="wp-block-list">
<li>Generate expression images yourself; base models produce a limited range of expressions, so look for expression LoRAs to help.</li>

<li>Find ready-made expression reference sheets, e.g. with the search keyword: Expression Sheet.</li>
</ul>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="911" src="https://0to60.top/wp-content/uploads/2025/09/image-12.png" alt="" class="wp-image-1032" srcset="https://0to60.top/wp-content/uploads/2025/09/image-12.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-12-300x142.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-12-1024x486.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-12-768x364.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-12-1536x729.png 1536w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>With expression reference images in hand, LivePortrait can perform the transfer. Worth noting: LivePortrait underperforms when transferring expressions across styles (photoreal, 3D, 2D), especially around the eyes. The workflow below came out of extensive experimentation.</p>



<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-11.png" alt="" class="wp-image-1031" srcset="https://0to60.top/wp-content/uploads/2025/09/image-11.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-11-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-11-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-11-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-11-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-11-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-11-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h2 class="wp-block-heading">4. Enriching Backgrounds</h2>



<p>There are many ways to enrich backgrounds; with Flux, Fill+Redux inpainting can also swap backgrounds. IC-Light is used here: as of this writing, local IC-Light deployment only supports SD1.5, but its results handle background and lighting together.</p>



<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-13.png" alt="" class="wp-image-1033" srcset="https://0to60.top/wp-content/uploads/2025/09/image-13.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-13-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-13-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-13-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-13-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-13-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-13-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>With that, the training material is complete. (During generation, InstantID could also be used to keep the facial features consistent.)</p>



<figure class="wp-block-video"><video height="906" style="aspect-ratio: 680 / 906;" width="680" controls src="https://0to60.top/wp-content/uploads/2025/09/20250915164626_rec_.mp4"></video></figure>



<h2 class="wp-block-heading">5. Material Preparation</h2>



<h3 class="wp-block-heading">5.1 Image-Size Preprocessing</h3>



<p>Since all the generated images are 768×768, which already suits training, this step was skipped in practice.</p>



<h3 class="wp-block-heading"><strong>5.2 Batch Image Captioning</strong></h3>



<p>Many model-training tools ship with captioning (e.g. Fluxgym and the 秋叶 model trainer), but each has drawbacks; JoyCaption is used here. Note: the trigger word should be an uncommon token; the 'Utwin' used in this run was, in hindsight, a poor choice.</p>
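<p>Whichever captioner is used, the trigger word still has to land in every caption file. A small sketch that prepends it to the .txt sidecar captions most trainers expect next to each image (the directory layout is the usual one-caption-per-image convention; the trigger token in the test is a placeholder, not a recommendation):</p>

```python
from pathlib import Path

def prepend_trigger(caption_dir: str, trigger: str) -> int:
    """Prepend the LoRA trigger word to every .txt caption in caption_dir,
    skipping files that already start with it. Returns how many were updated."""
    updated = 0
    for txt in sorted(Path(caption_dir).glob("*.txt")):
        text = txt.read_text(encoding="utf-8").strip()
        if not text.startswith(trigger):
            txt.write_text(f"{trigger}, {text}", encoding="utf-8")
            updated += 1
    return updated
```

<p>Because it skips files that already carry the trigger, the pass is safe to re-run after adding new captions.</p>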



<p>The workflow is shown below:</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-14.png" alt="" class="wp-image-1036" srcset="https://0to60.top/wp-content/uploads/2025/09/image-14.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-14-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-14-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-14-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-14-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-14-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-14-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<h2 class="wp-block-heading"><strong>6. Model Training</strong></h2>



<p>Any trainer will do; this run used the 秋叶 model trainer.</p>



<figure class="wp-block-image size-full"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-15.png" alt="" class="wp-image-1037" srcset="https://0to60.top/wp-content/uploads/2025/09/image-15.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-15-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-15-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-15-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-15-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-15-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-15-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>Key parameters:</p>



<ul class="wp-block-list">
<li>Repeats per image per epoch: there is no explicit setting, but it is effectively 5 per image. When training starts, every image under the image path is packed into a folder named in the form times_randomtext, where times is the number of training passes per image.</li>

<li>Total epochs: set to 10 in this run.</li>

<li>Save a checkpoint every 2 epochs.</li>

<li>network_dims: 32.</li>

<li>Learning rate: 1e-4.</li>
</ul>
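<p>The repeats hidden in the times_randomtext folder name feed straight into the total optimizer-step count; a quick sketch of the arithmetic, assuming the kohya-style convention this trainer appears to follow and a batch size of 1 (not stated in the post):</p>

```python
def total_training_steps(num_images: int, repeats: int = 5,
                         epochs: int = 10, batch_size: int = 1) -> int:
    """kohya-style trainers read repeats from the dataset folder name,
    e.g. '5_charname' means 5 passes over each image per epoch."""
    return num_images * repeats * epochs // batch_size

# e.g. 40 images at the settings above -> 2000 optimizer steps
print(total_training_steps(40))
```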



<p>This was a first attempt at model training, and many influencing factors deserve further study. For this run specifically:</p>



<ul class="wp-block-list">
<li>The model clearly learned the hairstyle and the evening-gown shoulder straps, which was not intended; this is strongly tied to the material and the captioning approach.</li>

<li>The facial features did not reach the fidelity I wanted, though the results in a 3D style were unexpectedly good.</li>
</ul>



<p>Sample generations:</p>



<figure class="wp-block-image size-full lightbox"><img loading="lazy" decoding="async" width="1920" height="1080" src="https://0to60.top/wp-content/uploads/2025/09/image-16.png" alt="" class="wp-image-1038" srcset="https://0to60.top/wp-content/uploads/2025/09/image-16.png 1920w, https://0to60.top/wp-content/uploads/2025/09/image-16-300x169.png 300w, https://0to60.top/wp-content/uploads/2025/09/image-16-1024x576.png 1024w, https://0to60.top/wp-content/uploads/2025/09/image-16-768x432.png 768w, https://0to60.top/wp-content/uploads/2025/09/image-16-1536x864.png 1536w, https://0to60.top/wp-content/uploads/2025/09/image-16-520x293.png 520w, https://0to60.top/wp-content/uploads/2025/09/image-16-700x394.png 700w" sizes="auto, (max-width: 1920px) 100vw, 1920px" /></figure>



<p>That wraps up this hands-on run at character consistency. Every step can be implemented in more than one way; video walkthroughs will follow. Stay tuned.</p>
<p><a rel="nofollow" href="https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/">ComfyUI: Consistent Characters in Practice</a> first appeared on <a rel="nofollow" href="https://0to60.top">0260</a>.</p>
]]></content:encoded>
					
					<wfw:commentRss>https://0to60.top/comfyui%ef%bc%8c%e4%b8%80%e8%87%b4%e6%80%a7%e8%a7%92%e8%89%b2%e5%ae%9e%e6%88%98%e7%af%87/feed/</wfw:commentRss>
			<slash:comments>0</slash:comments>
		
		<enclosure url="https://0to60.top/wp-content/uploads/2025/09/20250915164626_rec_.mp4" length="4916764" type="video/mp4" />

			</item>
	</channel>
</rss>
