
Ethical Considerations for Civilian AI Developers Using Open-Source Military Data

The bibliography file is located at citations.bib. All sources were freely accessible, with no paywalls, as of 2025-04-28.
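To get started, here is a minimal loading sketch in Python, assuming the bibtexparser package (v1 API) is installed; the fields printed depend on the file's actual contents:

```python
# Minimal sketch: load and inspect citations.bib.
# Assumes the bibtexparser package, v1 API (pip install "bibtexparser<2").
import bibtexparser

with open("citations.bib", encoding="utf-8") as f:
    db = bibtexparser.load(f)

# Each entry is a dict of BibTeX fields plus "ID" and "ENTRYTYPE".
for entry in db.entries:
    print(entry.get("ID"), "-", entry.get("title", "untitled"))
```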

Civilian AI developers working with open-source military data need to be aware of its distinct ethical and legal challenges. Even though this data is publicly available, it originates in military contexts where AI-driven decisions can carry life-or-death consequences. The NSCAI Final Report (n.d.) highlights the risks of using AI in military operations, particularly around accountability and compliance with international humanitarian law (IHL).

Developers need to build IHL principles, such as the proportionality standard, into AI systems to reduce harm to civilians and ensure these technologies are used lawfully (Woodcock, 2024). A key part of this process is carefully reviewing training data to identify and eliminate hidden biases that could lead to wrongful targeting or misclassification, failures that contribute to the “accountability gap” in AI decision-making (Crootof, 2022).
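As an illustrative sketch of what such a review could involve (the file name, column names, and threshold below are hypothetical assumptions, not part of this dataset), a first pass might check how labels and sources are distributed:

```python
# Illustrative training-data audit sketch. The file "training_data.csv",
# the columns "label" and "source", and the 5% threshold are hypothetical
# assumptions for demonstration only.
import csv
from collections import Counter

with open("training_data.csv", encoding="utf-8") as f:
    rows = list(csv.DictReader(f))

label_counts = Counter(row["label"] for row in rows)
total = sum(label_counts.values())

# Under-represented classes are a common source of misclassification bias.
for label, count in sorted(label_counts.items()):
    share = count / total
    flag = "  <-- under-represented" if share < 0.05 else ""
    print(f"{label}: {count} records ({share:.1%}){flag}")

# Checking provenance helps surface over-reliance on a single source.
print(Counter(row["source"] for row in rows))
```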

Because military AI technologies often have dual-use capabilities, it is essential to implement strong access controls and governance frameworks to prevent misuse (Paoli & Afina, 2025). One example of the risk is the “mosaic effect,” where combining multiple pieces of individually innocuous open-source intelligence can reveal sensitive information, unintentionally causing harm or violating data-handling policies.
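One possible shape for such a control, sketched here with entirely hypothetical roles and a hypothetical policy table, is a gate that checks a requester's role before releasing dual-use artefacts:

```python
# Hypothetical role-based access gate for dual-use artefacts. The roles,
# resource kinds, and policy table are illustrative assumptions, not a
# prescribed governance framework.
from dataclasses import dataclass

POLICY = {
    "public": {"documentation"},
    "researcher": {"documentation", "datasets"},
    "governance-board": {"documentation", "datasets", "models"},
}

@dataclass
class AccessRequest:
    user: str
    role: str
    resource_kind: str

def is_allowed(request: AccessRequest) -> bool:
    """Grant access only if the role's policy covers the resource kind."""
    return request.resource_kind in POLICY.get(request.role, set())

# Model artefacts are released only after governance review.
print(is_allowed(AccessRequest("alice", "researcher", "models")))      # False
print(is_allowed(AccessRequest("bob", "governance-board", "models")))  # True
```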

Furthermore, the United Nations Secretary-General has emphasized that decisions involving human life should never be left solely to algorithms or driven by commercial interests (United Nations, 2024). Human oversight must be present in all critical AI applications that use military data.
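A minimal sketch of that principle, with a hypothetical recommendation payload, is a gate that never acts autonomously and instead routes every consequential recommendation to a human reviewer:

```python
# Minimal human-in-the-loop sketch. The recommendation payload is a
# hypothetical example; the point is that no consequential action
# proceeds without explicit human sign-off.

def execute_with_oversight(recommendation: dict) -> bool:
    """Return True only when a human explicitly approves the action."""
    print("model recommendation:", recommendation)
    answer = input("Approve this action? [y/N] ").strip().lower()
    return answer == "y"

if __name__ == "__main__":
    rec = {"action": "flag-for-review", "confidence": 0.91}
    if execute_with_oversight(rec):
        print("Action approved by a human reviewer.")
    else:
        print("Action withheld; nothing executes autonomously.")
```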

In conclusion, civilian AI developers using open-source military data should prioritize transparency, fairness, and strong ethical oversight to manage the risks and ensure that AI is developed safely and responsibly, in accordance with international law and ethical standards (Roumate, 2020; Khan, 2023).

References

  • Crootof, R. (2022). AI and the Actual IHL Accountability Gap. SSRN.
  • Khan, S. Y. (2023). Autonomous Weapon Systems and the Changing Face of International Humanitarian Law. International Law Blog.
  • National Security Commission on Artificial Intelligence. (n.d.). Chapter 4 – NSCAI Final Report.
  • Paoli, G. P., & Afina, Y. (2025). AI in the Military Domain: A Briefing Note for States. UNIDIR.
  • Roumate, F. (2020). Artificial Intelligence, Ethics and International Human Rights Law. The International Review of Information Ethics, 29.
  • United Nations. (2024). Secretary-General’s Remarks to the Security Council on Artificial Intelligence.
  • Woodcock, T. K. (2024). Human/Machine(-Learning) Interactions, Human Agency and the International Humanitarian Law Proportionality Standard. Global Society, 38(1).

Dataset Structure

A single BibTeX file, citations.bib, containing the references listed above.
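An entry in citations.bib looks roughly like the following; the citation key and exact fields are illustrative, so consult the file itself for the authoritative records:

```bibtex
@article{crootof2022gap,
  author = {Crootof, R.},
  title  = {AI and the Actual IHL Accountability Gap},
  year   = {2022},
  note   = {SSRN},
}
```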

Usage

This dataset is intended for:

  • Researchers studying military AI ethics
  • Policy analysts examining IHL compliance
  • Developers working on defence-related AI systems
  • International relations scholars

Limitations

  • This is only a small sample of what is publicly available; many more reputable, authoritative, and comprehensive sources exist.
  • For further material on AI and IHL, see publications from the Red Cross and the United Nations.
  • This collection reflects an emerging research area in international law and AI ethics.

Licence

CC-BY-4.0 (assumed; verify the original source licences for individual entries).
