“All things are permitted for me, but not all things are of benefit. All things are permitted for me, but I will not be mastered by anything.”
I CORINTHIANS 6:12
Artificial Intelligence lays unparalleled gifts before us. We have every reason to receive many of them with gratitude. But with AI – like any potent technology – we must never assume that just because we can do something we should do it.
In the early 20th century, scientific advances delivered many similarly marvelous gifts to humanity. They yielded productivity and prosperity hardly imagined before, from household comforts to transportation to medicines. These same powers, however, also delivered curses of unprecedented magnitude: poison gases that tore out men’s eyes and lungs, artillery that turned cities from beacons of civilization to smoking rubble.
Technologies merely amplify human capacities, both for good and for ill. AI will do the same – in ways both dramatic and subtle.
So we must never confuse AI’s “You can” with “We should.” Most of all, that requires that we intentionally choose the bounds of the times, places, and roles we will yield to our ever-more-powerful machines. If we do not, the decisions will be made for us: they will always take more.
What AI’s Power Will Do
As New York City’s first skyscrapers rose heavenward in the late 1800s, the iconic churches that had long adorned the city’s skyline appeared to grow smaller. Their size – and all they held of wisdom and beauty – seemed to shrink, shadowed beneath new spires of steel and glass. Similarly, as AI’s capacities expand, all other sources of truth and authority will appear to diminish. We’ll find ourselves beguiled to believe that the Word of AI – not the teachings of Scripture, parental instruction, time-tested wisdom, enduring tradition, deep cultural values, or even our own senses – deserves the final say.
AI’s domination may not come as the “killer robots” or “computer overlords” that some imagine. More likely, AI will steadily, subtly shift our sense of what is real and true and good, of what it means to be human and what others mean to us, of who we are and what matters most.
To be sure, none of us knows exactly what this will include. Even the creators of AI do not fully understand how it does what it does. But of five things we can be fairly certain:
1 | AI will speak with the authority of a god. We will increasingly feel that AI offers the sum of all knowledge and insight, distilled into guidance that no human could ever provide. We will find it difficult not to feel, as the crowds shouted of King Herod, “This is the voice of a god, not of a man!” (Acts 12:22)
2 | AI will promote the values of its makers. Although AI presents a veneer of neutrality, its underlying assumptions, political biases, social values, and moral vision come from the people who built it – whether Western technologists, the government of China, or otherwise. Some AI will provide more diversity of perspective than others, but most systems today measure all things by the values of Silicon Valley.
3 | AI will tell us what we want to hear. AI will regularly deliver what humans have always sought: affirmation of our desires. It will tend to echo our inclinations and provide rationale for doing what we already want to do, from ending an unsatisfying marriage, to splurging on new shoes, to cutting off a “toxic” parent or friend. Where traditional wisdom applied the brakes to many human inclinations, AI will often push the accelerator.
4 | AI will replace things machines cannot replace. Technology provides efficiency and knowledge. AI will deliver these, marvelously. It will also offer substitutes for the most essential elements of human life, including intimacy and affection. Pursuing these gifts apart from real relationship with God and others is like trying to slake thirst with saltwater. It leaves us only more parched – and ultimately kills. Yet AI will promise otherwise. It offers relationship without distraction, fatigue, or judgment – and most of all, without asking anything of you. Real, flawed humans will find it hard to compete.
5 | AI will weaken our capacity to do anything it does for us. Like the muscles of astronauts who spend time in low-gravity environments, our ability to do anything we regularly outsource to AI will atrophy over time. That trade-off isn’t always bad. (For example, having books diminished people’s “muscles” for memorization.) But if we hope to retain the most vital human abilities – capacities essential to thought, relationships, and wellbeing – we’ll need to carve out times and places free from AI. We’ll need to choose to use the “muscles” we hope to retain, from basic reasoning and logical thought to the clear articulation of ideas and feelings. This will be especially vital for children who begin using AI in their most formative years.
Again, let us be clear: AI will also provide unparalleled benefits. That’s precisely why it will be so hard to discern when to receive its gifts and when to turn them down. Deep wisdom and fresh insights will be needed daily as the capacity of AI expands at breathtaking speed. But at every step we dare not forget: we must make the decisions … or the decisions will be made for us.
Let’s do that – thoughtfully, prayerfully, together – starting here. The pages ahead invite us to begin.
JEDD MEDEFIND, President of Christian Alliance for Orphans
Three questions to consider when evaluating a new AI opportunity:
What will be gained by using AI for this task and what will be lost?
How might using AI in this way form me over time: My strengths and capacities? What I desire and love? My character?
How might using AI in this way enrich or weaken my relationships: With my family? With friends? With those I’d typically encounter in daily life? With God?
ABOUT THE AI WORKING GROUP:
The content of this document was developed as an outcome of the CAFO AI Working Group: a group of 13 leaders who met during the spring and summer of 2025 to discuss key considerations regarding the ethical and effective use of AI tools in ministries serving orphaned and vulnerable children and families. All of the contents herein emerged from this collaboration to better serve the CAFO community.
Four Key Principles When Using AI
As Christian leaders, caregivers, and practitioners committed to serving vulnerable children and families wisely and well, we must consider the following principles when engaging AI tools and technology.
1 | Human Dignity: We must always seek to honor people created in God’s image (Gen. 1:27).
• AI tools should enhance, not replace, human connection. Automating repetitive tasks with AI can help free up time for connection and creativity, but AI should never be used as a replacement or intermediary in relationships.
• AI tools should never be used for harmful purposes. Ethical use of AI must consider the dignity and protection of all people, which includes never creating malicious content or uploading personal, sensitive or identifiable information about people (including names, images, addresses or personal details).
• AI tools should be corrected when they reinforce misinformation, bias, or harmful views of people. We must engage AI tools with discernment and quickly correct faulty assumptions, hallucinations, or bias. Fact-checking and human wisdom must always be applied.
2 | Privacy & Security: We must protect the private data of all people connected to our work, follow the laws and regulations that govern our work, and ensure the protection of vulnerable children and families.
• AI tools should never compromise the security and safety of children, their families, or those working alongside them. We must be vigilant about child protection and never use AI tools in a way that could lead to identifying private data that could put a child or family at risk.
• AI tools must be vetted to ensure data is not stored, shared or hosted in a way that violates regulations and industry best practices. If your work is governed by HIPAA or GDPR, for example, you will need to be vigilant in assessing each tool’s data storage policies and location to ensure compliance.
3 | Acceptable Use: We must exercise wisdom when discerning the best use of AI, establishing governance, and training employees on its use and associated risks (1 Corinthians 6:12).
• The use of AI tools should be governed by organizational policy. Each organization will need to establish a policy defining who can use AI tools and for which purposes (this may differ by department). Policies should align with organizational values and existing child protection, data protection, and privacy policies.
• AI tools require training and support for safe, ethical use by employees. Each organization will need to train employees on acceptable use and the associated risks of AI tools. It is recommended that organizations exercise special caution when using AI for any external communication and content creation.
4 | Accountability & Transparency: We must hold one another accountable for the ethical use of AI (Gen. 4:9) and should openly share when we use the tools, both internally and externally.
• AI tools should never be used in secret. When using AI, it is important to disclose its use. A simple disclaimer – “edited with AI assistance” or “photo generated by AI” – can provide greater accountability and transparency.
• Humans must review all outputs from AI tools. Machines cannot be held accountable, and so it is each individual’s responsibility to verify AI outputs, always keeping a “human in the loop.”
Ten Questions for Organizations to Consider
1. How do we ensure AI use does not involve identifiable or sensitive personal data?
2. What criteria should our organization use to approve or reject an AI tool?
3. How do we distinguish between internal AI use and public-facing outputs?
4. What safeguards prevent copyright or intellectual property violations?
5. How should AI-generated content be disclosed or attributed?
6. What rights do participants have if AI tools are used in meetings?
7. Who is responsible for monitoring compliance with AI policies?
8. What steps should be taken if AI use raises ethical or safeguarding concerns?
9. How do we keep AI practices updated as tools and risks evolve?
10. What process exists for escalating AI-related questions or issues?
Recommended Next Steps:
1 | Develop your organization’s AI Use Policy with input from multiple teams and departments. Form an internal working group to assess current use, review tools and guide policy development, implementation and training.
2 | Start with pilot use cases that are low-risk and high-reward. What repetitive tasks are currently being done by humans that AI could automate? Focus on these processes first to achieve greater impact while reducing risk.
3 | Establish the difference between public and private data and educate employees on each. Employees should know what kinds of content and data can NEVER be used with AI, what types are public and can be used with AI, and how to tell the difference. Regular conversations and training on this will help educate employees on appropriate use and protect everyone involved.
4 | Review policies, tools, best practices, and resources regularly as AI rapidly evolves. It is recommended that each organization establish feedback loops and committees to support the ongoing learning that occurs as you implement AI tools into your work.