Computers and Writing 2016 Presentation Transcript
This is the transcript from my 2016 presentation at Computers and Writing, for the panel H1: Brad Pitt Wants to Know “What’s in the Box”: How Technology, Rhetoric, and Disability Studies Play a Key Role in Breaking (open) Black Boxes. You can also watch the presentation (captioned!) on YouTube, since I gave it virtually. The tags were #cwcon and #h1 for the conference and panel, respectively.
So. Hello everybody. My name’s Alyssa Hillary. I’m talking to you from a computer screen because I am ever so slightly graduating now. I’m an engineer at the University of Rhode Island – not a recovered engineer!
So, the idea of the black box started off in engineering and in the sciences. We have a lot of processes to deal with, we have a lot of formulas to deal with, and most of the time there’s just too much for one person to know how absolutely everything works. Within a given field, we’ll say, OK, we’re not going to worry about understanding all the details of this process over here. We’re just going to say, we care what goes in, and we care what comes out. And everything else is someone else’s problem. Obviously, somebody needs to care about what happens inside that black box, but … not everybody.
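To make the engineering sense of the term concrete, here’s a minimal sketch (my illustration for this post, not something from the talk): in code, a function is a black box to its callers – they see the interface, samples in and smoothed samples out, and how the smoothing happens is someone else’s problem.

```python
def smooth(samples: list[float]) -> list[float]:
    """A black box to its callers: raw samples go in, smoothed samples
    come out, and that interface is all most users ever need to know."""
    result = []
    previous = samples[0]
    for sample in samples:
        # The internals (an exponential moving average, in this sketch)
        # are exactly the part almost nobody needs to look inside.
        previous = 0.8 * previous + 0.2 * sample
        result.append(previous)
    return result

# Callers only care about what goes in and what comes out.
print(smooth([0.0, 1.0, 1.0, 1.0, 0.0]))
```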
Then we have the humanities getting hold of the idea – yay, cross-fertilization between disciplines, I love it when that happens. At least, formally. Informally, I figure the humanities have been doing this since, oh, forever.
But here’s the question: What do different people, different fields, different disciplines, different interdisciplinary combinations, black box? That depends on where we stand, where we sit, where we retreat to when it’s time to curl up in a ball and shake.
As an engineer and as a disability studies person, I see quite a bit of it going on. Technology. Now, at Computers and Writing, we’re going to be better about not black-boxing everything technological than a lot of people are – computers is right in the name.
But, a lot of the time, engineers and scientists and technology people tend to black box, tend to ignore, the cultural forces behind our technology and how our technologies are going to get used. We don’t think about why our technologies are going to get used the way they are. We just think about, here’s how it’s getting used, and we’re not going to worry. At one extreme, I’ve seen people talking about how to do the death penalty as purely an engineering problem. This is something that is currently happening – how do we do it more efficiently? – without thinking about the societal factors behind why we’re even doing this. Less extreme, still pretty common.
In the humanities, we often treat technological developments, or we have often treated technological developments, as something that happens and the technologies appear from just about nowhere. So they’re treating the engineering process as a bit of a black box. Time goes in, technology comes out. Or, for mathematicians, coffee goes in, theorems come out.
Now, in the digital age, and just in general over time, the amount of technology we have to deal with increases. So what do we treat as a black box, and why do we do it? In the digital humanities, we use a lot of software. We use the Internet. How does the Internet work? It’s a bit of a hodge-podge – we’ve got websites here, websites there, different protocols, http, https, ftp, and on and on – and I don’t think anybody knows how all of it works. And this is what we’re working with. How much do we treat as a black box and how much do we try to understand?
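As a concrete illustration of just how much gets black-boxed in everyday digital work (my example, not from the talk): one line of code fetches a web page, and DNS, TCP, TLS, and the HTTP protocol itself are all hidden inside the call.

```python
from urllib.request import urlopen

# One line of "use the Internet." DNS lookups, TCP connections, the TLS
# handshake, and HTTP itself are all black boxes inside this call.
with urlopen("https://example.com") as response:
    print(response.status, len(response.read()), "bytes")
```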
We’ve got statistical analysis of online texts. If we don’t understand what our statistical analysis program is doing, how do we know if it’s any good? How do we evaluate it with no idea what’s in the box? But, how much time do we have to figure out what’s in the box? Who are we trusting, and how hard would it be to take the time and energy not to trust them?
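One way to partially open that box – a minimal sketch, and my illustration rather than anything from the talk – is to check a tool’s output against a text small enough to count by hand. Even that surfaces the hidden decisions (what counts as a word, whether case is folded, what happens to punctuation) that “simple” statistics rest on.

```python
from collections import Counter

def word_frequencies(text: str) -> Counter:
    # Even a "simple" frequency count hides decisions: what counts as a
    # word? Is case folded? What happens to punctuation? Those choices
    # are the inside of the box.
    words = text.lower().split()
    return Counter(word.strip(".,;:!?") for word in words)

# Sanity check against a sample small enough to count by hand.
sample = "The box, the box, and what is in the box?"
counts = word_frequencies(sample)
assert counts["the"] == 3 and counts["box"] == 3
print(counts.most_common(3))
```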
Some of our software? Maybe we don’t try and get into the nitty-gritty of how our web browser works. Maybe we don’t try and figure out why Google’s giving us the results it gives us – that’s a huge black box, and it’s proprietary, so good luck figuring out exactly how it works. Though search engine optimization is a thing; they’re trying to open the box, and they do a pretty good job.
And there’s our technology. Now, when we’re interdisciplinary, or when we’re trying to be interdisciplinary, generally we’ve got a few “home” disciplines. Not as in there are a few specific disciplines where we get interdisciplinary – I think everyone has the potential to try, at least. But you, in particular; me, in particular. I am an engineer. No matter what I’m approaching, I’m still thinking, to some extent, like a mathematician, like an engineer, like a disabled person. I bring those with me, and what things I’m likely to treat as a black box in my other work relates to my initial upbringing as a mathematician, engineer, and disability studies person.
We’re less likely to black box the stuff we think we can understand! So the more it relates to one of our home disciplines, the less likely we are to black box it. And we’re more likely to black box what we don’t understand! As an engineer, I’m more likely to try to get into the nitty-gritty of technology than someone whose original training was in literature. It’s just a fact. But if we’re trying to be interdisciplinary, if we’re trying to combine multiple things, maybe we want to get into the nitty-gritty of stuff that normally we wouldn’t. How?
I don’t actually know how. I know that working with people across atypical combinations of disciplines is one way you could potentially do it. Maybe I don’t know how to understand every piece of literary analysis, I know that I don’t, but I’m working with someone who does, and they can unpack that black box.
Maybe Sam, also on this panel, can unpack the black box of how we get from one piece of rhetoric to the next, constructing spaces of advocacy and asking, who is this organization even advocating for? And then, once he’s unpacked that so I can understand it, I come back and say, now as an engineer I understand the technologies. I’m going to get into the question of how we go from our current rhetorical position to the technologies it makes sense to try to build given that rhetorical position, given our current sociological, cultural background. How does this lead to given technologies, and how do we use those technologies? I’m an engineer, I’ll unpack that part.
Which gets me into technology, people with disabilities, and how we get technology that’s useable by people with disabilities. Again, I’m an engineer. I’m going to hit the questions that are relevant to engineers. What’s in the box? So.
Who is this really for? Oftentimes, when we’re designing technology that, at least in name, is for people with disabilities, is for disabled people – assistive tech – who’s getting asked what the needs are? It’s parents, it’s professionals, it’s teachers. It’s not the end users, disabled people, who will go home and use this technology regularly. So, my communication application, which I use fairly regularly – I have Proloquo4Text on my iPad. They’re actually pretty good at getting feedback from disabled people who use their application. A lot of others … aren’t.
You get people advertising technology for communication supports based on testimony from parents, professionals, teachers, but not the people who are using these applications, the people for whom these applications are our primary voices.
So who gets asked what the needs are? Who’s designing our technology? That’s another black box that I’m situated to unpack. See, I know of a group that’s trying to design technology to track environmental factors related to meltdowns for autistic people, and I am, as far as they know, the first autistic person that they’ve met. So who’s designing this, and who are they talking to, if, prior to me, they weren’t talking to any autistic people? In practice, who is the end user? I don’t know. It clearly isn’t us.
Where do we stand? Where do we sit? What’s a black box, and to whom? I tried to build communication software – very, very naively, definitely not working with the best tools; despite being an engineer, I am not a computer scientist – trying to treat some communication problems for autistic people, for neurodivergent people in general, as a problem of translation. My friends who know me better can understand me more than strangers can, because they know from experience what context I’m trying to put in, and they know how my syntax changes under stress. They know how my communication tends to differ from standard, white, abled, middle-upper class, neurotypical, cis-het, every other kind of normative speech there is. And they can translate from what I’m saying to what they should understand. And they can do this for other people who don’t know me as well.
Can we apply machine translation or computer-assisted translation to help us do this? In that position, I had to get into the nitty-gritty of how computer-assisted translation works. I had to learn enough about how it worked to start unpacking that black box, in a way that, if I were just trying to translate between two “standard” languages, I wouldn’t need to unpack it, because I’d be using the software out of the box, as it stands.
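For the flavor of the idea, here’s a deliberately naive sketch (mine, written for this post – not the actual software I built, and the example phrases are hypothetical): a tiny “translation memory” that maps phrases known from experience with one speaker to wording a stranger would understand. Real computer-assisted translation adds fuzzy matching and statistical models on top of this phrase-table core.

```python
# Hypothetical entries: phrases a friend has learned from experience,
# mapped to what a stranger should understand by them.
PHRASE_TABLE = {
    "too loud": "I am overloaded and need the noise turned down",
    "words gone": "I can't produce speech right now; I'll type instead",
}

def translate(utterance: str, table: dict[str, str]) -> str:
    """Longest-match-first phrase substitution, the naive core of a
    translation memory. Real CAT tools add fuzzy matching on top."""
    result = utterance.lower()
    for phrase in sorted(table, key=len, reverse=True):
        result = result.replace(phrase, table[phrase])
    return result

print(translate("Words gone", PHRASE_TABLE))
# -> I can't produce speech right now; I'll type instead
```

Even this toy version makes the point: the “knowledge” lives in the table, and building that table takes exactly the lived context friends have and strangers don’t.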
It’s about what our position is, in the world, what we’re trying to do, what we’re trying to understand. That tells us what black boxes we need to open up and ask, what’s in this box, and which ones we can leave alone and say, y’know, someone else can open that one.
Thank you everybody.