We often hear that “students are cheating,” “employees are cutting corners,” or “people are getting lazy.” But these behaviors rarely happen in isolation. They emerge from the conditions surrounding the learner, the worker, and the task.
In learning and development, we know that behavior is shaped more by systems and incentives than by knowledge alone (Schein, 2017). If we want responsible and meaningful use of AI, we have to design environments that make responsible use the natural choice — not the exception.
AI Misuse Is Often a Design Outcome
In landscape architecture, the term desire paths refers to the informal trails worn into the ground where people repeatedly walk the route that best matches their real intentions and movement patterns. These paths emerge when formal design does not align with what people actually need (Alexander, Ishikawa, & Silverstein, 1977).
The same thing happens in learning and work environments.
If the easiest path available is a shortcut, people will take it.
Norman (2013) explains that environments reflect design decisions — intentional or accidental — and people interact with systems in ways that make sense based on real pressures and goals.
With AI, shortcuts do more than speed up tasks. They reshape how people learn, think, and solve problems.
Klein (1998) shows that people make decisions that are sensible given the real-world pressures and cues around them. So AI misuse becomes predictable when:
• Expectations for responsible use are unclear
• Output is valued more than learning or quality
• Training focuses on access, not skill-building
• Tools are easier to misuse than to use thoughtfully
• Systems reward speed over reflection
When these conditions exist, misuse is not a personal failure.
It is a design outcome.
What Leading Organizations Are Doing
Some organizations are already shaping environments that encourage responsible, reflective, and skill-building AI use.
Arizona State University (ASU)
Creates institution-wide AI literacy frameworks and guidance for both faculty and students.
Condition Designed For: Shared expectations and common norms.
IBM
Provides internal Responsible AI training and certification tied to real-world application.
Condition Designed For: Valuing thoughtful process and ethical reasoning.
Microsoft
Uses human-in-the-loop oversight and judgment checkpoints before AI-generated outputs are accepted.
Condition Designed For: Accountability and reflective decision making.
GitLab
Evaluates developers based on clarity, maintainability, and documentation rather than just speed.
Condition Designed For: Aligning incentives with integrity and craftsmanship.
Duolingo
Uses AI to scaffold learning and provide adaptive feedback — not simply deliver answers.
Condition Designed For: Supporting learning rather than bypassing it.
The pattern is clear: these organizations design conditions, not just tools.
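The judgment checkpoints described above, such as Microsoft's human-in-the-loop oversight, can be made concrete. The following is a minimal sketch of the general pattern, not any vendor's actual API; all class and function names here are illustrative assumptions.

```python
# Minimal human-in-the-loop gate: an AI-generated draft cannot be
# released until a person has explicitly reviewed and accepted it.
# Names (Draft, review_checkpoint, publish) are illustrative only.
from dataclasses import dataclass, field


@dataclass
class Draft:
    """An AI-generated output awaiting human judgment."""
    content: str
    accepted: bool = False
    reviewer_notes: list = field(default_factory=list)


def review_checkpoint(draft: Draft, approve: bool, note: str = "") -> Draft:
    """Record a human decision; nothing downstream runs without one."""
    draft.accepted = approve
    if note:
        draft.reviewer_notes.append(note)
    return draft


def publish(draft: Draft) -> str:
    """Refuse to release any draft a human has not accepted."""
    if not draft.accepted:
        raise PermissionError("Draft requires human review before release.")
    return draft.content


# Usage: the gate blocks unreviewed output and passes reviewed output.
d = Draft(content="AI-generated summary of Q3 results")
try:
    publish(d)  # raises PermissionError: no human has signed off yet
except PermissionError:
    pass
review_checkpoint(d, approve=True, note="Checked figures against source.")
print(publish(d))  # released only after explicit approval
```

The design choice worth noticing is that the accountability step is structural: release is impossible to reach without a recorded human decision, rather than being left to individual discipline.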
How Organizational Learning Leaders Can Shape AI Use
1. Establish Shared Norms of Good Use
Move beyond policies. Offer examples, prompting questions, and modeling in real workflows.
2. Align Performance Expectations with Learning Quality
If speed is rewarded, speed wins.
If depth of reasoning is rewarded, reasoning develops.
3. Make Responsible Use the Easy Path
Provide template prompts, evaluation checklists, and reflection steps.
Responsible practice should be frictionless.
4. Build AI Capability, Not Just AI Access
The tool is not the competency.
The reasoning around the tool is the competency.
This Is System Redesign, Not a Tool Update
AI is already reshaping how people learn and work. Choosing not to act does not preserve the status quo; it simply allows the system to evolve by habit, convenience, and pressure instead of intention.
This moment calls for leadership and stewardship.
The conditions we create now will shape how the next generation learns, works, and makes decisions.
References
Alexander, C., Ishikawa, S., & Silverstein, M. (1977). A Pattern Language. Oxford University Press.
https://arl.human.cornell.edu/linked%20docs/Alexander_A_Pattern_Language.pdf
AI Plus. (2023). Duolingo’s AI: The future of teaching.
https://www.aiplusinfo.com/duolingos-ai-future-of-teaching/
GitLab. (2023). AI-assisted development guidelines.
https://handbook.gitlab.com/handbook/product/ai/
IBM. (2023). Responsible AI training resources.
https://www.ibm.com/training/artificial-intelligence
Klein, G. (1998). Sources of Power: How People Make Decisions. MIT Press.
Microsoft. (2023). Human-in-the-loop AI.
https://learn.microsoft.com/en-us/azure/architecture/guide/responsible-ai/human-in-the-loop
Norman, D. (2013). The Design of Everyday Things. Basic Books.
OpenAI & Arizona State University. (2024). ASU–OpenAI Partnership Announcement.
https://news.asu.edu/20240118-university-news-new-collaboration-openai-charts-future-ai-higher-education
Schein, E. (2017). Organizational Culture and Leadership (5th ed.). Wiley.