January/February 2025
By Miko McFarland and Jon Hibbard
Throughout history, technological innovations have consistently reshaped the way humans live and work. One of the latest of these significant shifts is the rise of generative artificial intelligence (AI). Even though AI has been lingering in the background for the last several decades, recent advancements in machine learning models—such as OpenAI’s ChatGPT—have brought generative AI into the mainstream. A recent research survey notes that generative AI is seemingly being adopted more quickly than both the internet and personal computers, with nearly 40 percent of respondents reporting having used it in the week before being surveyed (Bick et al. 2024).
As generative AI picks up momentum, so do the ethical challenges that come with its use. Developments are currently outpacing regulatory safeguards to keep giant technology firms accountable, so international educators must temper promises of innovation with responsible use. Moving forward, a key question to keep front and center will be: How can international education professionals ensure that adoption of generative AI is transparent, free from bias, accurate, and secure?
In this article, we explore the possibilities that generative AI brings to international higher education, while also considering the ethical issues that drive the need for established guidelines and best practices. Ultimately, we challenge practitioners to prepare for an AI-augmented future that still centers the human connection at the heart of international higher education. By thoughtfully integrating generative AI into their workflows, international educators can better serve those who are the “why” driving what they do: students.
Opportunities for Innovation
Generative AI refers to a type of artificial intelligence capable of creating new content—such as text, images, audio, and video—by learning patterns from existing data. Unlike traditional AI, which primarily analyzes existing data to classify, rank, or predict, generative AI uses advanced machine learning models to produce something new. By analogy, if traditional AI is playing a board game with a limited set of possibilities, like checkers, generative AI is playing the more open-ended game of chess, improving its play with every game it learns from.
As a recent report notes, the rapid incorporation of generative AI into “everyday applications and software across the internet and mobile devices”—including those from Google, Microsoft, and Apple—means that “users are seeing chances to opt in evaporate” (Madden et al. 2024, 47). As AI features become an unavoidable part of the modern workplace, recognizing and leveraging them can help international education professionals enhance their work in a variety of areas, including productivity and efficiency. With a few simple prompts, generative AI tools can, for example, create slide decks full of student-friendly content for predeparture presentations or orientations, saving practitioners hours of work. And AI-powered assistants can transcribe meetings, highlight key points, and summarize action items, allowing staff to engage fully in discussions.
These tools can also assist international educators in areas in which they might have less training, including data analysis and translation support. AI sentiment analysis tools swiftly interpret vast amounts of qualitative survey data, unveiling insights that can inform decision-making, and predictive analytics can process enrollment data to identify trends and forecast student demographics, enabling tailored recruitment strategies. Advanced language-translation tools can enhance staff-student interaction by facilitating better communication across language barriers.
With generative AI, international educators can augment their roles and workflows, thus focusing their expertise on building relationships, developing innovative programs, and providing better support to students. In other words, as generative AI becomes more adept at routine tasks, human beings can focus even more on being human.
Drawing upon the same skills and opportunities that they promote for students, international education practitioners can engage in experiential, collaborative learning as they incorporate generative AI tools into the international office workflow. Initial efforts can focus on experimenting with user-friendly platforms (e.g., Perplexity, Claude, or ChatGPT) to streamline tasks with a heavy administrative burden and then discussing the results with colleagues to deepen understanding and gain new perspectives. Being curious, adaptable, and open to learning are the mindsets that make up the fabric of international higher education. They are also the mindsets that will prepare those working in the field for a future with generative AI.
Handle with Care: The Ethical Debate
Although generative AI tools offer promise in terms of freeing up time and spurring creativity, there are ethical concerns surrounding their use. For example, AI-generated content is already blurring the lines between the human and the machine, as it is becoming more difficult to discern which text, images, videos, and audio come from people versus from AI (Feizi and Huang 2023). Generative AI is also notorious for its “hallucinations,” or fabricated content, and its biases. The development of generative AI tools also remains largely unregulated, which raises many questions about accountability, data use, and privacy.
What does this virtual blurriness mean for international educators, whose mission is centered on facilitating real-world experiences? Of the ethical concerns impacting international higher education, three of the most pressing are transparency, algorithmic flaws, and data privacy.
Transparency
Recent research has revealed widespread generative AI adoption by school-age young people; according to one report, seven out of 10 U.S. teenagers are using generative AI (Madden et al. 2024). Just as students are taught to acknowledge their sources in academic work to avoid plagiarism, international education professionals face questions about acknowledging the parts of their work that are augmented by generative AI.
- Should students, for example, be made aware when guidance from their adviser was informed by AI?
- Is it appropriate to use AI-generated photos to depict education abroad experiences in recruitment materials?
- Is it clear to students that they are engaging with a chatbot powered by generative AI and not a live person?
These are the types of questions that can guide international educators’ discussions about standardizing best practices for responsible AI use in their work. When using generative AI, practitioners should be open about its involvement in the content that they produce and obtain consent to use AI in the development process when necessary. Students and families have a right to know if the information they’re receiving has been influenced or created by generative AI.
International education professionals can take steps toward more transparent AI use by adopting a citation framework for generative AI. This could include crediting the generative AI tool(s) used as an author or contributor, adding transparency statements that clarify which content was created or manipulated by generative AI, and noting the date of content generation. Using a Chicago-style format, a citation might look like the following: “OpenAI. ChatGPT-generated response to query about study abroad trends. Generated October 15, 2024. Accessed via https://chat.openai.com.” Such citations ensure that students are aware of the tool used, the purpose of its application, and the context in which the content they are reading was generated.
Algorithmic Flaws: Bias and Hallucinations
Generative AI can analyze all types of data to generate content with speed and a high degree of precision. However, it is also capable of being biased and wrong. Algorithmic bias arises because AI relies on historical data, which may contain embedded biases that AI cannot detect. For instance, an AI screening tool used to help rank international student applications could unintentionally favor students from certain countries of origin because, according to the historical data used by the tool, students from those countries have traditionally scored higher on standardized tests. The AI tool, which lacks the ability to account for such nuance, may thus inadvertently reinforce biases that directly impact accuracy and, in this case, equity.
Generative AI is also known to hallucinate, generating responses that seem compelling but are fabricated. In a recent viral example of this tendency, users prompted ChatGPT to answer the simple question, “How many R’s are in the word ‘strawberry’?” (Eaton 2024). Depending on how the prompt was structured, the answer was likely to be wrong, or hallucinatory, with tools insisting that there are two R’s, not three. This shortcoming illustrates the importance of testing large language models against international education knowledge that human practitioners know to be standard in the field, asking questions like the following:
- Can these tools generate study abroad advice that is consistent with what a specialized study abroad adviser would offer?
- Can they provide sound immigration guidance as reliably as a trained designated school official?
- Can they take into account important cultural differences that professionals navigate when advising students?
Because fields like international higher education are so nuanced, intercultural, and ever-changing, AI-augmented content still requires significant human oversight to validate its accuracy. Therefore, international education professionals using generative AI to create content and inform decision-making should be mindful of and trained in responsible use of this technology.
Moving forward, practitioners can benefit from training modules and professional development to improve their skills in prompt engineering, bias mitigation, and hallucination detection. International educators can begin their own skills development by pursuing free or inexpensive coursework, training modules, and certifications offered by their higher education institutions or organizations or by reputable online learning platforms.
Data Privacy
As AI becomes more integrated into international higher education work, a vigilant approach to data protection is paramount. International education professionals in the United States, for instance, adhere to Family Educational Rights and Privacy Act (FERPA) and Health Insurance Portability and Accountability Act (HIPAA) guidelines when collecting and disseminating student and staff data, as well as to other countries’ legal requirements surrounding data collection, such as the European Union’s General Data Protection Regulation (GDPR). It is important to keep in mind that generative AI tools, like large language models, are primarily developed by private technology companies and open-source developers that use and analyze data inputs to refine and improve their products. This practice raises pressing questions around the storing and sharing of such data, chiefly concerning whether international educators should be sharing protected information in unlicensed, unregulated third-party platforms. The National Institute of Standards and Technology (NIST) emphasizes that “trustworthy AI is essential for ensuring that the technology will be used to benefit individuals and society,” underscoring the critical need for transparency, accountability, and ethical oversight in AI deployment (NIST 2023).
The emergence of chief AI officers at organizations and higher education institutions, like George Mason University, underscores the growing importance of responsible data use and risk mitigation, as protecting data is not just a legal obligation but also necessary for preserving the integrity of the field.
International educators can start by looking to their higher education institution or organization for guidelines and policies governing generative AI use and data sharing. A potential best practice in this area is using only generative AI tools available via trusted and institutionally approved platforms, like Microsoft 365 or Google Workspace, which offer enterprise-level security and data privacy guarantees under institutional licenses. Adopting these practices can help ensure compliance with regulatory standards and institutional policies, maintaining the trust of all involved in international higher education.
Closing the Gap: Preparing for an AI-Enhanced Future
Building generative AI literacy will enable international education practitioners to leverage the potential of this emerging technology and engage with these tools ethically and responsibly. However, it is important to be mindful of how the adoption of generative AI, even when it is done ethically, can widen existing gaps between higher education institutions and organizations. For example, well-resourced institutions with robust technological infrastructures can more easily integrate AI tools into their workflows, enabling them to provide more personalized student support, streamline operations, and develop innovative programs. Conversely, less-resourced institutions—including those serving historically marginalized populations—may struggle to onboard these technologies, further exacerbating inequities.
Therefore, it is important that international educators advocate for equitable access to technology, including AI tools and training, to bridge potential disparities. This can include sharing resources, expertise, and professional development in ways that are affordable and accessible. International education practitioners can look to institutional and field experts for guidance, resources, and connections; Northeastern University, for example, has developed a framework for AI literacy, offering resources such as “Guidelines for the Administrative Use of Generative Artificial Intelligence.” They can also pursue free or low-cost platforms for continued learning on generative AI developments, including podcasts, such as The AI Daily Brief; informative newsletters, such as The Neuron; blogs; and educational YouTube channels. Reliable online communities and discussion boards, such as NAFSA’s online forums, provide another avenue for international education professionals to disseminate and share best practices, ask questions, and discuss ethical considerations.
With intentional efforts to increase access to informed generative AI use, international educators can help close the equity gap to better ensure that all colleagues and students benefit from this technology, an approach that aligns with the inclusive values at the heart of international higher education.
Conclusion
Generative AI is reshaping the work that professionals in international higher education do by creating content and automating tasks to free up time for what truly matters in the field—building relationships, fostering cultural understanding, and supporting student learning. As these tools reshape international educators’ work, however, they demand intentional oversight to ensure that AI enhances, rather than undermines, practitioners’ efforts. This includes establishing best practices and providing training in areas like transparency, biases and inaccuracies, and data privacy to align with the values of equity, inclusivity, and ethical engagement that define international higher education.
Ultimately, generative AI is a tool—not a replacement—for the human connection that drives the work that international educators do. The future is not about choosing between technology and humanity; it is about leveraging technology to amplify what makes the work of this field meaningful. Now is the time to lean into the skills that international educators aim to foster in their students: curiosity, adaptability, and a commitment to create a more inclusive, connected world.
Author note: For the purposes of demonstrating its collaborative potential, some of the content of this issue of Trends & Insights was informed by and debated within OpenAI’s ChatGPT-4. All content was subsequently reviewed and edited for accuracy and relevance.
References
Bick, Alexander, Adam Blandin, and David J. Deming. 2024. “The Rapid Adoption of Generative AI.” Working Paper No. 32966. National Bureau of Economic Research. https://www.nber.org/papers/w32966.
Eaton, Kit. 2024. “How Many R’s in ‘Strawberry’? This AI Doesn’t Know.” Inc. Updated August 28. https://www.inc.com/kit-eaton/how-many-rs-in-strawberry-this-ai-cant-tell-you.html.
Feizi, Soheil, and Furong Huang. 2023. “Is AI-Generated Content Actually Detectable?” College of Computer, Mathematical, and Natural Sciences. University of Maryland (website). Updated May 30. https://cmns.umd.edu/news-events/news/ai-generated-content-actually-detectable.
George Mason University. 2024. “George Mason University’s Amarda Shehu Appointed Inaugural Chief Artificial Intelligence Officer.” George Mason University (website). Updated September 4. https://www.gmu.edu/news/2024-09/george-mason-universitys-amarda-shehu-appointed-inaugural-chief-artificial.
Madden, Mary, Alexandra Calvin, Andrew Hasse, and Amanda Lenhart. 2024. The Dawn of the AI Era: Teens, Parents, and the Adoption of Generative AI at Home and School. Common Sense. https://www.commonsensemedia.org/sites/default/files/research/report/2024-the-dawn-of-the-ai-era_final-release-for-web.pdf.
NIST (National Institute of Standards and Technology). 2023. “NIST Risk Management Framework Aims to Improve Trustworthiness of Artificial Intelligence.” National Institute of Standards and Technology (website). Updated October 20. https://www.nist.gov/news-events/news/2023/01/nist-risk-management-framework-aims-improve-trustworthiness-artificial.
Miko McFarland, director of university relations, oversees institutional partnerships for Barcelona Study Abroad Experience. McFarland is a NAFSA publications chapter author, NAFSA Trainer Corps member, and the 2024 recipient of NAFSA’s Lily von Klemperer Award. Her professional focus includes promoting best practices in education abroad program development, risk management, leadership, and emerging technologies.
Jon Hibbard is a problem solver and lifelong learner with an emphasis on strategic alignment of academic goals with global learning. As assistant director of academic integration and global learning at Northeastern University, Hibbard leads course equivalency and articulation efforts for faculty-led programs and traditional study abroad. With more than 10 years of experience, he continues to seek growth in technology integration for the betterment of higher education and global learning.