Many communicators have had a bumpy return to work as the X saga continues – this time with its AI ‘Grok’. The social media corporation recently added image-generating features to its AI, which gave some X users the ability to bypass safety filters, create illicit images of others without their consent, and distribute that content across the dark web.
Understandably, the sector is deeply concerned about AI being used for nefarious purposes, raising further questions about the role of this evolving technology. X is also under renewed scrutiny from charities, as the sector revisits conversations about the online spaces they occupy and the knock-on effects of decisions made by Big Tech.
Our role at CharityComms is to convene insights and resources from communication experts working in and with the sector. That’s why we’re compiling what we know so far about Grok, responses from the sector, and existing resources to help all charities make sense of this issue for themselves.
We’ll be covering a few themes in this piece, as the issue crosses social media, AI, community safeguarding and more – and we’ll keep this blog updated as applicable information and more charity examples come in.
What’s the Grok and X AI issue?
In a nutshell, here’s what’s happened so far with AI features from X:
- Concerns since rollout: xAI software Grok was rolled out to UK users in May 2024. Critics have raised fears about Grok’s guardrails compared with other AI software, with reports of high levels of misinformation, hate speech and hallucinations.
- A dangerous new feature: The ability to generate images was introduced in late 2025. Since then, many non-consensual, sexualised deepfakes of women and children have been created and disseminated by X users.
- Regulatory response: In the UK, the Information Commissioner’s Office (ICO) and Ofcom continue their investigations into whether the platform has breached UK data protection laws and the Online Safety Act. Keir Starmer has issued a statement that “all options are on the table”, including nationwide bans or hefty fines, if X fails to act in accordance with regulation. The response aligns with the Government’s Violence Against Women and Girls (VAWG) Strategy, which is driving forward the criminalisation of creating non-consensual, sexualised imagery. Ofcom has reinforced its partnership with the Internet Watch Foundation (IWF) to advocate for the changes needed to combat AI-generated sexualised imagery.
- X’s response: X has reportedly issued a series of updates to its software since Starmer’s statements, with safety researchers and MPs concerned about the effectiveness of these fixes. Examples range from a data-processing pause – temporarily suspending the use of UK and EU users’ personal data to train its AI – to changes to its filter parameters. X also moved AI image generation behind its premium paywall and revised its terms of service as a countermeasure, though for a short period Grok was still able to make images. Breaking news on Thursday 15 January reports that Grok now officially blocks all users from creating images that undress real people, though concerns have been raised that historical content remains accessible on the platform. Elon Musk (X’s owner) has criticised the UK’s regulators, accusing them of censorship.
- Sector response: UK charities and campaigners have been calling for regulatory intervention for some time as ethical, safeguarding and data issues mount up against X. Many charities have been reconsidering their approach to the platform, balancing the need to support beneficiaries, manage organisational risk, and weigh the issues of remaining in a “toxic environment”.
How can my charity navigate this issue?
Stay informed and protect your people
Use trusted resources to keep informed about what’s happening, so you have all the facts for decision-making and can tell your team about any impact it will have on your charity, its communication and campaigning activity.
Be aware that staff, trustees or beneficiaries may be personally impacted by this issue, so ensure you have robust safeguarding and wellbeing processes in place.
Social media moderation expert Rebecca Fitzgerald, CEO of StrawberrySocial, points out that “clear safeguarding duties … extend to the platforms and environments charities choose to operate in” and charities will need to uphold “UK safeguarding standards”.
Proactive risk management; embedding safety into campaign plans and operations; signposting to news, help and reporting systems; and learning what to monitor can all help to build up your safeguarding practices.
Remember the role of regulators in keeping people and charities safe. If your charity is being targeted by harassment, misinformation or deepfakes, report this as a serious incident to The Charity Commission. If you come across illicit imagery of children on platforms, the Internet Watch Foundation has a form for reporting child sexual abuse images. If you have a strong case for informing consultations, consider talking with Ofcom and the ICO, if you aren’t already.
Assess the platform’s popularity and its purpose for you
More ethical red flags linked to the platform will likely prompt another exodus of personal and organisational accounts from X, whether that means leaving completely or a series of dormant accounts. Research from YouGov unsurprisingly shows that many Britons agree AI tools should not be allowed to be used for nefarious purposes, and their support for Musk and X remains consistently low – so the public may back charity decisions to leave platforms and/or get involved in campaigns around the issue.
Zoe Amar, digital consultant and co-leader of the Charity AI Task Force, told us “This is a moment for charities to review their presence on X… Do they align with your values and your duty of care to beneficiaries?”. Consult our downloadable social media checklist with a series of decision-making factors and prompts to help evaluate and decide on your approach to social media.
Rebecca and her team have been calling for charities to leave the platform, warning that proximity to platforms like this can damage public trust and organisational reputation. If your decision is to leave, Rebecca advises: “be clear with comms for your audience, explain what is happening, why, and what they should do.”
For some charities, the decision-making may have got easier with this latest scandal, drawing clearer lines in the sand; while for others, the decisions may have become much more complicated.
Safi O’Shea, Head of Communications at NSPCC, told us that while they share the deep concern from the public and sector on this issue and are joining in the rallying cry to Ofcom to take enforcement action against X, they still receive contacts to their Helpline through the platform.
After their last review in February 2024, they have “significantly reduced the amount [they] post” but “decided to remain on the platform to ensure [they] could continue to moderate disclosures” from their audiences. Their decision “is now due for a review” and they “will be considering it carefully”.
Every charity needs to consider its own position on social media strategy and how any changes may impact its audiences, staff and volunteers, making a decision that is ultimately in service of its purpose.
Deciding to stay or leave a platform doesn’t have to be a static response – simple frameworks for reviewing your approach can be revisited as and when news like this comes up. It’s important to stress-test your position and have things in place to support any change in direction.
Implement AI guardrails that follow best practice
Zoe’s advice also extends to AI: “the Grok story shows exactly why charities need to be proactive about AI. We’re seeing what happens when safety isn’t built in from the start”.
Seeing this news, or campaigning against wrongful AI use, might bring your own approach into sharper focus: you may need to decide where your organisation draws its practical and ethical lines.
Access AI policy templates to build your own, review the role of AI in your communication, and consult the terms and conditions of AI software carefully to make sure you are clear about your use cases for AI.
Be clear on the guidance
Campaigning or lobbying activity directed to government or private companies requires a clear approach rooted in fact, and must be in the best interests of your organisation and its purpose to avoid regulatory review.
Mazeda Alam, Head of Trustee Guidance at The Charity Commission for England and Wales, advised us that charities can look to their social media and campaigning and lobbying activity guidance to help in cases like this. Mazeda points out that “such activity can be an important way for charities to help deliver their purpose” but recommends ensuring you can “show how this decision was assessed and agreed by the charity’s trustees.”
Improve your understanding of the issue
It’s important that charities have strong digital literacy, understanding what misinformation, disinformation and deepfakes are and the impact they create.
Turn to the Full Fact website for clear outlines. Their resources, such as The Full Fact Toolkit and How to Spot a Deepfake, provide simple and practical ways to assess content.
How can the sector take action?
Consider a collective voice
The power we have as a sector is our voice and community – and we’ve seen in the past that collaborative communications can generate more impact for issues and campaigns by extending beyond any organisation’s individual reach. The question for the sector now is how best to do so.
Research from the Internet Watch Foundation has found that the consensus among the public is that children deserve to be safe online, yet mixed messaging can water down the impact. Thomas Dyson, Head of Marketing for the Internet Watch Foundation, wrote back in November 2025: “We need one clear and consistent message consistently repeated, that builds public momentum and political pressure” – calling for a shift from fragmented advocacy in the child protection space to more coordinated communication.
If you’re a charity with a more tenuous link to the issue, you may be questioning what you can realistically do, but there are other collaborations and ties you can legitimately make.
Support the sector to lead AI conversations and guidance
The Charity AI Task Force was set up to involve the charity sector in national conversations about AI, to share what we need as charities, as well as what we can teach others.
Emma Bracegirdle, founder of The Saltways, has been running sector-wide research to understand what charities understand about AI-generated imagery. She agreed with Zoe that “the sector has a chance to lead on responsible AI use”. Guidance for charities on AI generation from The Saltways findings is set to launch in late February.
Support your peers
Support might not always mean you need to be the one leading the charge. Share campaigns, sign letters, promote the voices of experts, or just be there for your comms peers.
What are others saying and doing?
Full Fact called out X for its role in a statement by CEO Chris Morris: “It’s high time that powerful tech platforms take their responsibilities for online safety seriously. [they cannot be] allowed to regulate themselves on tackling non-consensual deepfake content, as it’s self-evident that they have been more interested in profits than public safety.” This example shows that charities can hold Big Tech to account.
It’s deeply worrying that @grok could create sexualised images of children. @X must act now by disabling image-editing features until robust safeguards are in place to stop this from happening again. https://t.co/WOmovvKKxc
— Lucy Faithfull Foundation (@Lucy_Faithfull_) January 7, 2026
The Lucy Faithfull Foundation posted on X, calling for the platform to disable its image generation features – being on the platform allowed them to directly tag X and join a mounting chorus of others calling for change.
Refuge, a domestic violence charity, has swiftly reacted with a series of outputs, including an official statement, a handraiser across socials and email, and press activity. Laura Burnell, Head of Communications and Marketing at the charity, told us the statement, “saw widespread pick up in national and women’s lifestyle media… and over 1,700 people have signed [their] handraiser” already.
Their work has “urged the Government to deliver on measures to protect against online abuse…” and “in response, a number of [their] followers and supporters have shared their personal experiences of being subject to intimate image abuse”. The charity is now working on “creating clear guidance”. Laura credits their connection to experts in their “sector-leading Technology-Facilitated Abuse and Economic Empowerment team” in “shaping a rapid response to reports”.
NSPCC, meanwhile, looked at both practical resources and campaigning to support their audiences. Understanding that parents and carers need more information to support child safety online, while equally stressing that “parents cannot be solely responsible for mitigating the risks that platforms, like Grok, create for children”, the charity took a “two-fold” approach. As well as calling on regulatory bodies and government to take action, they have focused on being there for their audience: equipping parents in their network with the resources they need to make informed decisions about their children’s online activities.
Another “X-odus” (charities leaving the platform) is underway, with organisations announcing their departures. These include The Wildlife Trusts, Sands, Plan International UK and Winston’s Wish.
Civil Society also reports that some charities are quietly leaving, with their accounts going inactive.
Madeleine Sugden, digital impact consultant, continues to monitor the numbers and activity of charities on various social media platforms, and the shifts happening in the sector. Updates on this work can be found on her blog.
The RSPB have taken measures to make their X account private and remove all historical content due to safeguarding concerns. They continue to hold an account but have no plans to use it for the foreseeable future.
Conclusion
While the news is shocking and concerning, we’re left with some optimism from the unfolding mobilisation across different sectors, where charities have been, and will continue, using their ties and influence with regulatory bodies and the public.
Together, we can hold Big Tech accountable for poor oversight, working toward protecting communities and communicating in a safer online world. We will be joining our charity members in following this process and sharing resources that help them communicate effectively.
We hope this piece has been useful in summarising the events to date and has provided you with some practical steps forward for your own response, regardless of your cause.
We are looking for thought leadership and case studies on how charities can respond to the burning issues in today’s world. Please reach out to our comms team at comms@charitycomms.org.uk to discuss your ideas.
More resources from CharityComms and beyond
- The Ofcom investigation: More information on the formal investigation to review if X breached the Online Safety Act by failing to prevent Grok from generating non-consensual sexual images and child abuse material.
- The ICO statement: A regulatory warning from the Information Commissioner’s Office, who will assess whether the platform’s “opt-out” data scraping model violates GDPR and individual privacy rights.
- The Guardian reports: Journalists cover government warnings and responses to Grok’s AI imagery scandal.
- Internet Watch Foundation (IWF) press statement on AI nudification and child protection: Welcomes the UK Government’s strategy to ban AI “nudification” apps and highlights research on the criminal misuse of generative AI tools.
- How to keep your online communities safe: Expert tips on community management, moderation, and safeguarding staff and beneficiaries in increasingly toxic digital environments.
- How to migrate your X data: Charity Digital have a great technical guide on how charities can migrate their communities to alternative platforms to safeguard their data.
- The CharityComms AI and social media hub pages will keep you updated on the latest news and views in these areas. Bookmark them so you can return to resources and links applicable for charity communicators whenever you need.
- Leaving X gracefully: StrawberrySocial explore the steps charities can take to leave X.
