



CLC number: TS106; TP18    Document code: A    Article ID: 1673-3851(2025)07-0556-15
Citation format: , . Floral scarf pattern generation method based on generative adversarial networks and stable diffusion models [J]. 學報(自然科學), 2025, 53(4): 556-570.
Abstract: With floral scarf patterns as the research object, this study proposed a dual-stage collaborative generation method combining generative adversarial networks (GANs) and stable diffusion models for rapid scarf pattern generation. First, we constructed an SDXL model-based scarf pattern augmentation workflow, establishing a floral scarf pattern dataset through systematic pattern collection, preprocessing, and data augmentation. Subsequently, in the first stage of pattern generation, we improved conventional GANs by integrating both self-attention and border-attention mechanisms into the StyleGAN framework, developing the SAB-StyleGAN model to generate base floral scarf patterns. Finally, in the second stage of pattern generation, we built an image-to-image workflow based on the SDXL model, effectively grafting the detailed rendering capabilities of stable diffusion models onto GANs to produce refined floral scarf patterns with enhanced controllability and precision. Experimental results demonstrated that the generated refined floral scarf patterns exhibited superior clarity, achieving an FID value as low as 41.25, and closely resembled authentic designer samples. This method provides an efficient solution for rapid scarf pattern generation, significantly reducing enterprise design costs, enhancing production efficiency, and advancing digital transformation in the fashion industry.
Key words: silk scarf pattern; pattern generation method; generative adversarial networks (GANs); stable diffusion models; image-to-image translation; data augmentation
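The second-stage image-to-image refinement described in the abstract can be sketched briefly. The snippet below is a minimal, illustrative example built on the open-source diffusers library; the SDXL checkpoint, the file name base_pattern.png (standing in for a first-stage GAN output), and the prompt, strength, and step settings are all assumptions for illustration, not parameters taken from the paper.

```python
import torch
from diffusers import StableDiffusionXLImg2ImgPipeline
from diffusers.utils import load_image

# Load an SDXL checkpoint in image-to-image mode (checkpoint choice is an
# assumption, not taken from the paper).
pipe = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
).to("cuda")

# "base_pattern.png" is a hypothetical file name for a base floral scarf
# pattern produced by the first-stage GAN.
base_pattern = load_image("base_pattern.png").convert("RGB")

# A moderate strength keeps the GAN-generated layout while letting SDXL redraw
# fine detail; prompt and numeric settings below are illustrative assumptions.
refined = pipe(
    prompt="an elegant floral silk scarf pattern, intricate details, high quality",
    image=base_pattern,
    strength=0.4,
    guidance_scale=7.0,
    num_inference_steps=40,
).images[0]

refined.save("refined_pattern.png")
```

Lower strength values preserve more of the GAN-generated composition, while higher values hand more of the result over to the diffusion model, which is the trade-off behind the controllability claimed for the two-stage design.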
0 Introduction
As a classic accessory, the silk scarf occupies an important place in the fashion world. In recent years, with the rapid development of the global fashion industry and the growing demand for personalization, the efficiency and quality of scarf pattern design have become key factors in product competitiveness. However, enterprises currently design scarf patterns mainly by hand: pattern quality depends heavily on designers' experience and creativity, design efficiency is low, and it is difficult to launch trend-conforming products quickly enough to keep pace with a fast-changing market. In addition, the high labor cost of traditional design methods makes it hard for enterprises to maintain a price advantage amid fierce competition, which further weakens their market position. A method for rapid scarf pattern design is therefore urgently needed, and pattern generation based on computer image processing offers one way to address this problem.
Existing pattern generation methods fall mainly into two categories: those based on generative adversarial networks (GANs) and those based on diffusion models. Among GAN-based methods, Radford et al. [1] proposed the DCGAN (deep convolutional generative adversarial network) model, whose convolutional architecture can generate stable images, but pattern quality is limited by the scale of the training data, and insufficient data easily leads to mode collapse. 任雨佳 et al. [2] likewise proposed a DCGAN-based clothing style design method, which produces repetitive and disordered pattern textures when training data are insufficient. Arjovsky et al. [3] proposed the WGAN (Wasserstein generative adversarial network) model, which improves training stability through the Wasserstein distance but still requires sufficient data. 田樂 et al. [4] found that textile patterns have complex structures and high-frequency details, placing stricter demands on models and datasets; small datasets readily cause feature learning to fail and trigger mode collapse. ……
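To make the contrast between the DCGAN and WGAN training objectives concrete, the sketch below shows the standard WGAN critic and generator losses with weight clipping, in the spirit of Arjovsky et al. [3]. It is a minimal PyTorch illustration of the general technique, not the model used in this paper; the names critic, generator, and the clipping constant are illustrative.

```python
import torch

def wgan_critic_loss(critic, real, fake):
    # The critic approximates the Wasserstein distance by maximizing
    # E[D(x_real)] - E[D(x_fake)]; return the negative for a minimizer.
    return -(critic(real).mean() - critic(fake).mean())

def wgan_generator_loss(critic, fake):
    # The generator tries to raise the critic's score on generated samples.
    return -critic(fake).mean()

def clip_critic_weights(critic, clip_value=0.01):
    # Weight clipping is WGAN's simple way of keeping the critic (roughly)
    # 1-Lipschitz, which the Wasserstein objective requires.
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-clip_value, clip_value)
```

In a typical training loop the critic is updated several times per generator step, with clip_critic_weights applied after each critic update. Compared with DCGAN's binary cross-entropy objective, this loss yields smoother gradients and more stable training, but, as noted above, it does not remove the need for sufficient training data.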