My name is Alex Yancey (he/him), and I am going to be making use of my existing blog for this class. However, I unpublished the post, so it won't be visible on my home page. You need the URL in order to view the blog.
I will update this blog post when the next assignment requires it.
While researching this topic, I found many important discussions happening around autonomous vehicles. Some users are technophiles who can hardly wait for the ubiquity of vehicles that drive themselves, and all the luxuries and conveniences that come with them. Others feel that this takes away yet more agency from a populace already struggling with its relationship with automation and loss.
The Tesla feels like a high-tech toy to many of its loyal fans, but Autopilot-related crashes are appearing in the news more often. Because Autopilot is a Level 2 autonomous system, Tesla repeatedly signals that the driver must remain attentive at all times. However, does that deviate from their marketing language and the hype around this killer car feature?
So for those concerned with safety and with how autonomous vehicles should be regulated in the future, my number one recommendation is to first consider your ownership model and what autonomy means to you. If you like the idea of owning a car but have an aversion to driving one, buying a Tesla seems like a wonderful idea.
If you're a current Tesla owner and think Autopilot is your ticket to traveling like a celebrity, you'll be in for a rude awakening if you rely solely on it.
Big Data: Privacy
User privacy is a sticky subject for me. You could call me a privacy hypocrite, since I value my own privacy highly while using other people's data for my own gain. It's become clear that even if you don't care about your own privacy, others do, and they are willing to pay money for it.
With new European privacy laws such as the GDPR, the public has begun to take notice of how much of their personal information companies are collecting. In the United States, however, our government has been known to use data mining to create detailed profiles on individuals, a practice enabled by the Patriot Act, which gave law enforcement agencies broad powers to spy on Americans without a warrant or probable cause. The United States still seems to be in the wild west regarding privacy laws, except, ironically, for California, which has enacted the most stringent privacy protections in the country. The CCPA (California Consumer Privacy Act) has created a lot of buzz because it requires qualifying companies that do business with California residents to disclose what personal information they collect and to let consumers opt out of its sale.
Heavy-handed laws might be one way to tackle this issue; however, I would much rather try to educate the average person on how much of their data is being used to make other people money. You could also go with the libertarian approach and simply PAY people whenever their data is used. I'm not sure what the right answer is, but I think it's important we find one soon. We need to find a way to balance the needs of both sides: companies should have the ability to make money off user data, but users should have the power to decide who gets access to it.
The digital divide is an issue that has been around for decades. It refers to the gap between those who have access to computers and Internet services and those who don't. As a result of this disparity, there are people in the world today who do not have access to all the opportunities that come with being online. This issue particularly affects Black and Hispanic communities, as well as rural areas where residents may be more isolated from technology. Especially since COVID, jobs that require a reliable internet connection at home have become extremely common. While most of us can agree that the digital divide is something we want to eliminate, it has been difficult to find a solution. In fact, one of the biggest obstacles to closing the gap is the cost of equipment and internet service. While some families can afford to buy a computer or smartphone for their child, many cannot. And even if they could, these devices often need to be replaced every few years due to technological advancement. Even then, not everyone wants to go through the hassle of learning how to use a new device.
Bias in Algorithms
This chapter really opened my eyes to bias in algorithms. I had no idea that a program could end up encoding an implicit bias. I find this very interesting because it shows how people can carry biases without even realizing they are there. When you write an algorithm, you test and weigh it against your own biases and experiences, and the results will be skewed if you never test for those biases. Of course, as a white guy, these topics can easily sit in my blind spot, and I would like to hear others' thoughts on the article and what they learned. This is also something we need to keep in mind when we create our own algorithms: we should always be aware of the biases we bring into them.
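To make that concrete, here is a minimal sketch (with made-up hiring data I invented for illustration) of how bias sneaks in. The code below never mentions any protected attribute, yet because the historical outcomes it "learns" from were skewed against group B, the rule it produces demands a higher score from group B applicants:

```python
# Hypothetical past hiring records: (group, interview score, was hired?)
# Group B was historically under-hired at the same scores as group A.
history = [
    ("A", 8, True), ("A", 6, True), ("A", 5, True), ("A", 4, False),
    ("B", 8, False), ("B", 7, True), ("B", 6, False), ("B", 5, False),
]

# "Learn" a per-group threshold from the data: the lowest score
# that ever resulted in a hire within each group.
thresholds = {}
for group, score, hired in history:
    if hired:
        thresholds[group] = min(thresholds.get(group, 99), score)

print(thresholds)  # {'A': 5, 'B': 7}

def decide(group, score):
    """Apply the learned rule to a new applicant."""
    return score >= thresholds[group]

# Same score, different outcome -- the bias came from the data,
# not from any explicit rule the programmer wrote.
print(decide("A", 6))  # True
print(decide("B", 6))  # False
```

Nothing in that code looks biased on its face, which is exactly the point: if you never test the outputs across groups, the skew stays invisible.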