New Delhi: In the past several days, a surge of Ghibli-style AI portraits has taken social media by storm, allowing users to transform their photos into the signature aesthetic of legendary animator Hayao Miyazaki.
While the Ghibli AI trend has been widely embraced for its whimsical and nostalgic appeal, it has also ignited concerns over digital privacy, intellectual property rights and the potential misuse of personal data.
The AI-generated images, powered by OpenAI’s ChatGPT, have gained immense popularity. OpenAI’s CEO, Sam Altman, took to social media to acknowledge the trend’s success, writing:
“It’s super fun seeing people love images in chatgpt. but our GPUs are melting. We are going to temporarily introduce some rate limits while we work on making it more efficient. Hopefully won’t be long! ChatGPT free tier will get 3 generations per day soon.”
India has emerged as a major participant in this AI-powered artistic wave. Praising the country’s rapid adoption of AI, Altman noted:
“What’s happening with AI adoption in India right now is amazing to watch. We love to see the explosion of creativity—India is outpacing the world.”
Rising Concerns Over Data Privacy and Legal Complexities
Despite the viral success of the trend, concerns about privacy and security are mounting across the globe. Experts have raised alarms that OpenAI and other AI-driven platforms may be quietly collecting personal data on a massive scale.
Many users, without fully understanding the implications, are voluntarily handing over their images, which could be stored, analyzed, or even used for AI training without explicit consent.
Pavan Duggal, a cyber law expert and Supreme Court advocate, warned of the legal grey areas surrounding AI-generated art. He told APAC Media:
“The current Ghibli AI trend is bringing forward a large number of legal, policy, and regulatory challenges from a user standpoint. First and foremost, users need to understand that copyright issues present a grey zone in this area. Whether Ghibli-style AI art violates the copyright of another entity is a complex question that must be carefully considered. There are other potential intellectual property rights concerns as well. Commercial exploitation of recognizable, distinctive Ghibli aesthetics could constitute unfair competition as well as trademark dilution in some jurisdictions.”
Duggal also highlighted the privacy risks associated with uploading personal images.
“Generation of Ghibli images by users uploading their personal photos could have ramifications on personal data protection. Users’ privacy rights may be impacted as this technology evolves. Users need to handle such issues with care. It is advisable to review the terms and conditions and policies of current platforms before uploading personal images, as these may become part of the public domain and could impact privacy, both personal privacy as well as data privacy.”
Also read – ‘AI is Writing the Code for Humanity’: PM Modi at AI Action Summit in Paris
Data Security and Identity Fraud Risks
Experts in identity verification and fraud prevention have also voiced serious concerns over the security of personal data. Paritosh Desai, Chief Product Officer at IDfy, outlined the risks of AI-powered image manipulation and data collection. He told APAC Media:
“Most users don’t realize that when they upload images, some AI apps may store, analyze, or even share them—often without clear consent. Many companies use this data to train AI models or for other hidden purposes, which raises serious privacy concerns. To stay safe, users should look for AI tools that provide clear options to delete images, opt out of AI training, and limit how long their data is stored. Companies that fail to offer this transparency could soon face heavy fines under stricter laws like the EU AI Act and India’s DPDP Act, which are cracking down on misuse of personal data and vague consent practices.”
He also warned about the threats posed by potential data breaches.
“The biggest risks? Data leaks, hacking, and proliferation of identity fraud. If an AI platform is breached, personal images could be misused to create fake identities, forge documents, or commit fraud. Fraudsters can generate fake Aadhaar cards, tampered ID proofs, or even manipulate insurance claim photos to deceive verification systems. As these attacks evolve, fraud detection must stay one step ahead with advanced tampering detection, image integrity verification, and digital provenance tracking to flag manipulated content. Businesses leveraging AI must embed privacy and fraud prevention measures at every stage—or risk exposure to fraud, reputational damage, and regulatory penalties.”
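None of the vendors quoted here publish their detection internals, but the simplest building block of the "image integrity verification" and "digital provenance tracking" Desai describes can be illustrated with cryptographic hashing. The sketch below is purely illustrative (all function and field names are hypothetical, not any company's API): an image is fingerprinted with SHA-256 at upload time, and later re-hashed to detect whether even a single byte has been altered.

```python
import hashlib
import time

def fingerprint_image(image_bytes: bytes) -> str:
    """Return a SHA-256 digest that changes if any byte of the image changes."""
    return hashlib.sha256(image_bytes).hexdigest()

def make_provenance_record(image_bytes: bytes, source: str) -> dict:
    """Bundle the fingerprint with where and when the image was recorded."""
    return {
        "sha256": fingerprint_image(image_bytes),
        "source": source,
        "recorded_at": time.time(),
    }

def verify_image(image_bytes: bytes, record: dict) -> bool:
    """Re-hash and compare: a mismatch indicates the image was modified."""
    return fingerprint_image(image_bytes) == record["sha256"]

# Detect a single-byte tamper in a pretend image payload
original = b"\x89PNG...pretend-image-bytes"
record = make_provenance_record(original, source="user-upload")
tampered = original[:-1] + b"X"
print(verify_image(original, record))   # True
print(verify_image(tampered, record))   # False
```

Real provenance systems (such as those following the C2PA standard) go much further, cryptographically signing edit histories, but the principle is the same: any undocumented change to the pixels breaks the chain.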
Deepfake Threats and the Future of AI Security
The conversation around AI-generated images is also tied to the growing sophistication of deepfake technology, which presents a significant challenge for cybersecurity and identity verification systems. Vijender Yadav, Co-founder, CEO & MD of Accops, stressed the importance of responsible AI adoption.
“The popularity of trends like ‘Ghibli AI’ serves as a timely reminder for all of us to be more aware of how our personal data, including photographs, might be utilized online. As technology progresses, it presents both immense opportunities and new challenges. One such challenge is the rise of sophisticated deepfakes, which misuse AI to create realistic fakes, posing a significant threat to security systems that rely on identity verification, like facial recognition.”
While responding to APAC Media’s query, Yadav further emphasized the need for proactive measures.
“At Accops, we believe in harnessing technology responsibly to counter these risks. That’s why we are focused on innovations like integrating cutting-edge deepfake detection technology from partners like Pi Labs directly into our Accops BioAuth facial authentication solution. Our aim is not to hinder technological progress but to build the necessary safeguards, ensuring that tools like facial recognition remain secure and trustworthy for organizations navigating the complexities of the modern digital world and protecting them from evolving threats like AI-generated identity fraud.”
The Balancing Act
The Ghibli AI trend highlights the transformative potential of artificial intelligence in digital artistry. However, as AI-generated images continue to flood social media, users must remain aware of the privacy and security risks involved.
Experts urge individuals to exercise caution, read platform policies carefully, and demand transparency from AI service providers to safeguard their personal data.
While the creative possibilities are boundless, the rapid evolution of AI-driven image generation underscores the pressing need for clear legal frameworks and stronger digital security measures to protect users in an increasingly interconnected world.