Friday 28 April 2023

Importance of Transparency in Artificial Intelligence

A new paper on transparency in Artificial Intelligence systems, co-authored by Dr Leishi Zhang, raises some interesting questions.


https://researchspace.canterbury.ac.uk/94734/the-impact-of-system-transparenc 

The Impact of System Transparency on Analytical Reasoning

CHI EA '23: Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems

Article No.: 274, Pages 1–6

https://doi.org/10.1145/3544549.3585786


Abstract

In this paper, we present the hypothesis that system transparency is critical for tasks that involve expert sensemaking. Artificial Intelligence (AI) systems can aid criminal intelligence analysts; however, they are typically opaque, obscuring the underlying processes that inform outputs, and this has implications for sensemaking. We report on an initial study with 10 intelligence analysts who performed a realistic investigation exercise using the Pan natural language system [10, 11], in which only half were provided with system transparency. Differences between conditions are analysed and the results demonstrate that transparency improved the ability of analysts to reason about the data and form hypotheses.


References

  1. Ashraf Abdul, Christian von der Weth, Mohan Kankanhalli, and Brian Y. Lim. 2020. COGAM: Measuring and Moderating Cognitive Load in Machine Learning Model Explanations. Association for Computing Machinery, New York, NY, USA, 1-14.
     https://doi.org/10.1145/3313831.3376615

  2. Sule Anjomshoae, Amro Najjar, Davide Calvaresi, and Kary Främling. 2019. Explainable Agents and Robots: Results from a Systematic Literature Review. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems (Montreal QC, Canada) (AAMAS '19). International Foundation for Autonomous Agents and Multiagent Systems, Richland, SC, 1078-1088.

  3. Jessie Chen, Katelyn Procci, Michael Boyce, Julia Wright, Andre Garcia, and Michael Barnes. 2014. Situation Awareness-Based Agent Transparency.
     https://doi.org/10.21236/ADA600351

  4. Karl de Fine Licht and Jenny de Fine Licht. 2020. Artificial Intelligence, Transparency, and Public Decision-Making. AI and Society 35, 4 (2020), 917-926.
     https://doi.org/10.1007/s00146-020-00960-w

  5. D. Delmolino and M. Whitehouse. 2018. Responsible AI: A framework for building trust in your AI solutions.

  6. Penny Duquenoy, Donald Gotterbarn, Kai Kimppa, Norberto Patrignani, and B.L. William Wong. 2018. Addressing Ethical Challenges of Creating New Technology for Criminal Investigation: The VALCRI Project. 31-38.
     https://doi.org/10.1007/978-3-319-89297-9_4

  7. Heike Felzmann, Eduard Fosch Villaronga, Christoph Lutz, and Aurelia Tamò-Larrieux. 2019. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Big Data & Society 6, 1 (2019), 2053951719860542.
     https://doi.org/10.1177/2053951719860542

  8. Sam Hepenstal, Neesha Kodagoda, Leishi Zhang, Pragya Paudyal, and B.L. William Wong. 2019. Algorithmic Transparency of Conversational Agents. In IUI Workshops.

  9. Sam Hepenstal, B.L. William Wong, Leishi Zhang, and Neesha Kodagoda. 2019. How analysts think: A preliminary study of human needs and demands for AI-based conversational agents. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 63, 1 (2019), 178-182.
     https://doi.org/10.1177/1071181319631333

  10. Sam Hepenstal, Leishi Zhang, Neesha Kodagoda, and B.L. William Wong. 2020. Pan: Conversational Agent for Criminal Investigations. In Proceedings of the 25th International Conference on Intelligent User Interfaces Companion (Cagliari, Italy) (IUI '20). Association for Computing Machinery, New York, NY, USA, 134-135.

  11. Sam Hepenstal, Leishi Zhang, Neesha Kodagoda, and B.L. William Wong. 2021. Developing Conversational Agents for Use in Criminal Investigations. ACM Trans. Interact. Intell. Syst. 11, 3-4, Article 25 (Aug 2021), 35 pages.
     https://doi.org/10.1145/3444369

  12. Intel.gov. 2022. Principles of artificial intelligence ethics for the intelligence community. https://www.dni.gov/index.php/features/2763-principles-of-artificialintelligence-ethics-for-the-intelligence-community. Accessed: 2022-06-16.

  13. G.A. Klein. 1993. A recognition-primed decision (RPD) model of rapid decision making. In Decision Making in Action: Models and Methods, G.A. Klein, Judith Orasanu, R. Calderwood, and Caroline E. Zsambok (Eds.). Norwood: Ablex Publishing Corporation, 138-147.

  14. G.A. Klein, R. Calderwood, and D. MacGregor. 1989. Critical decision method for eliciting knowledge. IEEE Transactions on Systems, Man, and Cybernetics 19, 3 (1989), 462-472.
     https://doi.org/10.1109/21.31053

  15. G. Klein, J. K. Phillips, E. L. Rall, and D. A. Peluso. 2007. A data-frame theory of sensemaking. In Expertise out of context: Proceedings of the Sixth International Conference on Naturalistic Decision Making, R. R. Hoffman (Ed.). Lawrence Erlbaum Associates Publishers, 113-155.

  16. David Leslie. 2019. Understanding artificial intelligence ethics and safety: A guide for the responsible design and implementation of AI systems in the public sector. The Alan Turing Institute (2019).
     https://doi.org/10.5281/zenodo.3240529

  17. Arun Rai. 2020. Explainable AI: from black box to glass box. Journal of the Academy of Marketing Science 48, 1 (January 2020), 137-141.
     https://doi.org/10.1007/s11747-019-00710-5

  18. R. Roovers. 2019. Transparency and responsibility in artificial intelligence. A call for explainable AI.

  19. Ben Shneiderman. 2020. Human-Centered Artificial Intelligence: Reliable, Safe & Trustworthy. International Journal of Human-Computer Interaction 36, 6 (2020), 495-504.
     https://doi.org/10.1080/10447318.2020.1741118

  20. Aaron Springer and Steve Whittaker. 2019. Progressive Disclosure: Empirically Motivated Approaches to Designing Effective Transparency. In Proceedings of the 24th International Conference on Intelligent User Interfaces (Marina del Ray, California) (IUI '19). Association for Computing Machinery, New York, NY, USA, 107-120.
     https://doi.org/10.1145/3301275.3302322

  21. B.L. William Wong. 2004. Data analysis for the Critical Decision Method. The Handbook of Task Analysis for Human-Computer Interaction (01 2004).

  22. B.L. William Wong and Neesha Kodagoda. 2016. How Analysts Think: Anchoring, Laddering and Associations. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 60, 1 (2016), 178-182.
     https://doi.org/10.1177/1541931213601040

  23. B.L. William Wong and Margaret Varga. 2012. Black Holes, Keyholes And Brown Worms: Challenges In Sense Making. Proceedings of the Human Factors and Ergonomics Society Annual Meeting 56 (10 2012), 287-291.
     https://doi.org/10.1177/1071181312561067

