The essay should contain a brief abstract summarizing its contents and a clear thesis that states the position you are defending. It should offer arguments for the point of view you are defending and entertain objections to its thesis.
Two weeks ago, we discussed a thrilling topic in class: the use of machines in therapy.
It is remarkable how technology is capable of learning by itself and how quickly it can advance in skill. Artificial intelligence can surpass human intelligence in many ways, which is why many companies and industries have come to depend on automation for accuracy and efficiency of performance. This by itself is a moral issue, since it increases the unemployment rate, especially in developed countries. However, since employers and other shareholders are morally obligated to create value for stakeholders, creating value by using automation to reduce costs and improve performance can be argued to be just as moral, especially if it means better products and services for customers and ensures environmental safety procedures that can be monitored through computer progress reports.

Robots have proven effective in many industries, but when it comes to consulting on a sensitive issue that requires careful participation, it is important to ask whether they are safe to use and whether using them is moral. One case of this is ELIZA, which I plan to examine in my paper. In doing so, I will explain why I believe it is immoral to incorporate machines into the treatment of mental or psychological issues. My explanation will discuss issues 1, 2, and 3.
Machines can learn and adapt.
There is a good discussion >> "You're very helpful… you're very generous… very cute" (chat in application with Woebot)
Problems with Woebot:
First, I really loved that Woebot uses emojis, which instantly make it more fun and expressive. But I personally sometimes get impatient, and the problem with Woebot is that it immediately wants to ask me questions. The first question the robot should ask is "Is there something bothering you?" so that I can vent right away. The second problem is that it does not let me type freely; instead, it only offers me a set of answer options to choose from and responds based on them. This leads me to the third problem: sometimes I do not want to choose any of these options, and then Woebot comes across as annoying and unhelpful. Yet there is one particular issue I found especially wrong. The technology introduces itself, voluntarily, as an emotional assistant and states that it is not human. Later, when it asks about my feelings and I say I feel neither happy nor sad, just in between, it replies, "I am feeling a little sluggish myself. I think it's the weather." This got me thinking: how can it relate to the weather if it cannot live in it? This is faulty information that makes the whole experience feel fake, especially because it was programmed poorly; one proof is that it will not tell me about its own personality.