Last year I was interviewed by Dominique Garingan for her dissertation on algorithmic literacy, and I thought I would share here the thoughts that arose from that conversation. She also published an article about her dissertation findings in the most recent issue of Canadian Law Library Review: “Advanced Technologies and Algorithmic Literacy: Exploring Insights from the Legal Information Profession”.
Merriam-Webster Dictionary defines “algorithm” as “a step-by-step procedure for solving a problem or accomplishing some end”. Algorithmic literacy, in turn, is an understanding of how computer systems apply algorithms, so that users can think critically about how to approach them. Many applications labeled as artificial intelligence are based on algorithms.
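To make the definition concrete, here is a minimal sketch in Python (the names and data are my own, purely for illustration) of a classic textbook algorithm, binary search, which is exactly such a step-by-step procedure:

```python
def binary_search(items, target):
    """A step-by-step procedure: repeatedly halve the search range
    of a sorted list until the target is found or the range is empty."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid           # found: return its position
        elif items[mid] < target:
            low = mid + 1        # target must be in the upper half
        else:
            high = mid - 1       # target must be in the lower half
    return -1                    # not present

print(binary_search([2, 5, 8, 12, 21], 12))  # prints 3
```

Every line is an explicit step, and following those steps is all the computer does; the same is true, at vastly greater scale, of the systems we label AI.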
Algorithmic literacy matters, especially when we use algorithm-based applications in ways that affect what we do in the world. Many AI applications are complicated, and in-depth knowledge of how they work is not feasible for many people. That said, a higher-level understanding of how these systems are used, what goes into them, and what comes out of them is more important than the details of the algorithms themselves.
In tandem with algorithmic literacy, data literacy is important, because the data that goes into these systems is so directly related to what comes out that if we don’t understand the data, then understanding the algorithm itself almost doesn’t matter. Both the inputs and the outputs depend on attributes of the data, including its quality, treatment, annotation, and selection, as well as the programming that goes into analyzing it.
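As a small illustration of why data treatment matters (all numbers here are invented), the same trivial algorithm, an average, gives very different answers depending on whether a common data-quality issue is handled first:

```python
# The same simple algorithm (an average) over the same dataset,
# with and without handling a missing-value sentinel.
case_durations_days = [30, 45, 28, -999, 52, 40]  # -999 marks "missing"

naive_average = sum(case_durations_days) / len(case_durations_days)
cleaned = [d for d in case_durations_days if d >= 0]
cleaned_average = sum(cleaned) / len(cleaned)

print(naive_average)    # -134.0: distorted by the missing-value code
print(cleaned_average)  # 39.0: the -999 sentinel was excluded first
```

The algorithm never changed; only the treatment of the data did, and the answer swung from nonsense to something plausible.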
The thing about algorithms, like many computer applications, is that you can force them to spit out answers. Spreadsheets will readily give numbers to 10+ decimal places based on made-up inputs, but whether the answers are acceptable for particular needs is more complex to understand. Related to algorithmic literacy, a better understanding of the technology and what goes into it may help adoption, if the systems are found to be appropriately robust. Many people talk about AI as if it’s magic: “Oh, I’ll do this and then this comes out”, as if it’s an answer box.
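Here is a toy Python sketch of that false precision (the inputs are invented guesses, not measurements):

```python
# Made-up inputs still yield an answer to ten decimal places;
# the precision of the output says nothing about its reliability.
estimated_hours = 7.3    # a guess
hourly_rate = 412.17     # another guess
projected_cost = estimated_hours * hourly_rate

print(f"{projected_cost:.10f}")  # prints something close to 3008.8410000000
```

The ten decimal places lend the number an air of authority that the guesswork behind it does not deserve.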
It’s the responsibility of the people supplying these tools, and of the people using them, to verify what is in fact happening. Many developers want their tools to be adopted faster, but they can’t necessarily answer questions about the fairness or compatibility of the data and technology they’re using, or about how they’re deployed. There could be room for standards and some kind of endorsement system. I don’t know exactly what it would look like, but it would need to be fairly sophisticated, because these are complex systems.
There’s a culture in technology development of moving fast and breaking things, so overlaying some structure on that approach makes sense. If there were some external validation of the quality of these products, that would help adoption. Many people say things that, I suspect, are not true about the way the systems work, and potential users don’t always have the data or algorithmic literacy to ask the questions that would elucidate that. So, to use the applications, users would have to take it on faith that the systems are fit for use. In some cases organizational reputation gives assurance that this is the case, and in others the underlying systems may be made open source so they can be closely examined. Outside of these means, it is difficult even for experts to assess suitability.
Transparency and accountability are important considerations for the community to assess. There’s a great deal of potential, especially if you look at it as incremental improvement. For example, if systems are designed with checks where a person reviews their work, they don’t have to be right all the time to be useful. This means there may be a great deal of room for assistive technologies that aid rather than replace human work.
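As a hypothetical sketch of such a check (the threshold and labels are invented, not drawn from any real system), a simple confidence gate that routes uncertain outputs to a person might look like this:

```python
# A human-in-the-loop gate: outputs below a confidence threshold are
# sent to a person instead of being applied automatically.
REVIEW_THRESHOLD = 0.90

def route(prediction, confidence):
    if confidence >= REVIEW_THRESHOLD:
        return f"auto-accept: {prediction}"
    return f"flag for human review: {prediction} (confidence {confidence:.2f})"

print(route("relevant", 0.97))  # confident enough to pass through
print(route("relevant", 0.62))  # uncertain, so a person decides
```

A system wired this way can be useful even when it is frequently wrong, because its mistakes are caught rather than acted on.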
The algorithms that form an AI system are in some ways neutral. They don’t care; they just reflect us back at ourselves like mirrors. If we feed in unjust data, then we get unjust outputs, and it may not always be clear that that’s what’s happening. These things need to be dealt with before we can trust algorithmic systems.
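Here is a deliberately toy Python sketch of that mirror effect (the data is entirely invented): the procedure is “neutral” in that it merely counts past outcomes, yet it faithfully reproduces whatever pattern the historical data contains.

```python
from collections import Counter

# A toy "model": it predicts whatever outcome was most common for
# similar past cases. The procedure is neutral; the history is not.
historical_decisions = [
    ("group_a", "approved"), ("group_a", "approved"),
    ("group_a", "approved"), ("group_a", "denied"),
    ("group_b", "denied"),   ("group_b", "denied"),
    ("group_b", "denied"),   ("group_b", "approved"),
]

def predict(group):
    outcomes = Counter(o for g, o in historical_decisions if g == group)
    return outcomes.most_common(1)[0][0]

print(predict("group_a"))  # approved
print(predict("group_b"))  # denied: the past pattern is mirrored back
```

Nothing in the code singles out either group; the disparity comes entirely from the data it was given.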
When I think about the factors that would influence greater adoption, assurance that systems are reliable would enhance the community’s willingness to accept them. In contrast, continued concerns about the quality, fairness, and suitability of the data will deter adoption.
From my perspective, there’s a great deal of potential for things to be excellent. There’s room to build wonderful tools that will help us do our work better and make things more efficient than before. To get there, I would prioritize the data that’s going in, how it’s structured, and whether the uses it’s being put to can be justified. In law right now, the development of data inputs lags behind the technology that can be used with them. For the time being, the data inputs are in some ways more important than the algorithms themselves.
This post was originally published on Slaw here.