Retinal scans are most often seen in science fiction movies, in some sort of high-tech, confidential lab. This makes sense; to most people, a security measure that relies on picking up the intricacies of the human eye seems very advanced and impressive. The idea behind retinal scans was introduced much earlier than I anticipated, in 1935, by two doctors in New York. They understood the uniqueness of the innermost layer at the back of the eyeball but could only dream of practical applications. The first implementation of their idea came about 40 years later, when a man named Robert Hill created and patented the first retinal scanner. Today, retinal scans are used for a variety of applications beyond just confidential access control. The scanning technology can be found in prisons, at ATMs, and even in doctors' offices, since some diseases alter the makeup of the retina.
The technology behind the retinal scanner depends on the individuality of each human retina; fortunately, even identical twins have different retinas. More specifically, it is the capillaries (tiny blood vessels) in the retina that set one eye apart from another. These capillaries absorb light differently than the tissue that surrounds them, which means an infrared beam shined directly at the retina will be absorbed more by some regions than others. By setting up a lens to read the reflection of the infrared beam, the scanner can capture a map of the user's capillaries. The system can then overlay this reading with the reading on file and, because the pattern of the retina is essentially fixed, find a match.
A map of retinal capillaries
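To make the overlay step concrete, here is a minimal sketch of how a fresh reading might be compared against a stored template. The grids, the cell-by-cell agreement score, and the 0.85 threshold are all illustrative assumptions, not how any real scanner works.

```python
# Hypothetical sketch: compare a fresh retinal reading against a stored
# template. Each "map" is a small grid marking where capillaries absorbed
# the infrared beam (1) versus where they did not (0). A match is declared
# when the two maps agree closely enough to tolerate sensor noise.

def match_score(reading, template):
    """Fraction of grid cells where the two capillary maps agree."""
    cells = [(r, t) for row_r, row_t in zip(reading, template)
             for r, t in zip(row_r, row_t)]
    agree = sum(1 for r, t in cells if r == t)
    return agree / len(cells)

stored = [[1, 0, 1],
          [0, 1, 0],
          [1, 1, 0]]
fresh  = [[1, 0, 1],
          [0, 1, 0],
          [1, 0, 0]]   # one cell differs from the stored map

THRESHOLD = 0.85  # assumed tolerance for sensor noise
is_match = match_score(fresh, stored) >= THRESHOLD
```

Real systems extract a compact feature signature rather than comparing raw grids, but the overlay-and-score idea is the same.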
Errors can occur with retinal scanners when medical issues come into play. Diabetes, glaucoma, and other diseases can alter the pattern of capillaries within the retina, rendering an earlier enrollment reading useless. Another downside of retinal scanners is, as you might imagine, a very high cost. Even as the technology begins to spread, the price remains steep. Still, the fact that retinal scanners are a reality outside of the movie theater is an exciting step toward us all living a sci-fi reality.
https://en.wikipedia.org/wiki/Retinal_scan
http://www.armedrobots.com/new-retinal-scanners-can-find-you-in-a-crowd-in-dim-light
http://www.oculist.net/others/ebook/generalophthal/server-java/arknoid/amed/vaughan/co_chapters/ch015/ch015_print_01.html
CMSC 150 - Fall 2016 - The World of Computer Science
Monday, November 28, 2016
Wednesday, November 16, 2016
Twenty Questions
This week, I will be writing about the game Twenty Questions. While I was originally introduced to the game on family road trips that usually ended quickly because my sister and I didn't think of very complicated things, I later got a handheld video game version of Twenty Questions. Even though I thought I would always be able to beat it, it consistently surprised me with its ability to guess the right answer, or something very close to it. Recently, someone mentioned the game to me, and I realized that it is really just a long algorithm.
An example of the handheld game I used to have.
Each time the program asks the user a question, it is able to narrow down the vast number of options to a smaller subset. For example, a typical starting question is "person, place, or thing?" Depending on the answer, the algorithm can rule out the other two subsets. By continually asking questions, this process compounds so that by the end of the 20 questions, only a few reasonable options remain.
In order for this program to function, the game must be pre-programmed with descriptions of thousands of possible answers. It must also be pre-programmed with hundreds of possible questions to ask, along with the categories each answer belongs to. The process is closely analogous to a binary search algorithm, which Wikipedia describes as "comparing the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues on the remaining half until it is successful." For Twenty Questions, this means the game eliminates the subsets of answers that no longer fit and continues to search through the remaining subset. What seems like a simple game becomes much more complex when programming principles are applied.
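The halving idea in the Wikipedia description quoted above can be sketched in a few lines. The answer list below is made up; the point is that each comparison discards the half where the target cannot lie, just as each question discards a subset of possible answers.

```python
# A minimal binary search: compare the target to the middle element and
# discard the half where it cannot lie, repeating until found.

def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1."""
    low, high = 0, len(items) - 1
    while low <= high:
        mid = (low + high) // 2
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            low = mid + 1      # target cannot be in the lower half
        else:
            high = mid - 1     # target cannot be in the upper half
    return -1

# An illustrative, alphabetically sorted pool of possible answers.
answers = ["airplane", "dog", "guitar", "mountain", "pizza", "teacher"]
```

With 20 yes/no questions, this halving can in principle distinguish about 2^20 (roughly a million) objects, which is why the handheld game feels so hard to beat.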
https://en.wikipedia.org/wiki/Twenty_Questions#Computers.2C_scientific_method_and_situation_puzzles
https://en.wikipedia.org/wiki/Binary_search_algorithm
https://www.amazon.com/Radica-20Q-Artificial-Intelligence-Game/dp/B0001NE2AK
Friday, November 11, 2016
Autocorrect
In class this week, I believe on Tuesday, Dr. Jory mentioned that all autocorrect programs (such as those in Microsoft Word, smartphones, etc.) can be simplified down to just the primitive data types available in Java. At its base, an autocorrect function works off of simple word frequencies. Smartphones, for example, are programmed to assume the user intends to input words that are common in texting. If a user begins to respond to a text with the character "O" and follows it up with the character "l," the program will assume they meant to type "Ok." However, this is not only due to the frequency of the word "Ok" in texting.
These smartphones are also made to account for the physical human error that is natural when typing on a small screen. In the "Ok" example, the phone knows the "k" and "l" keys are located very close to each other; there is likely a table stored on the device recording the distance from one key to another. If, on the other hand, the user had pressed the "e" key after the "O," the phone would make the more likely assumption that they meant to press "r," based on how close together "e" and "r" are located.
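The two signals described above, word frequency and key distance, can be combined in a simple scoring sketch. The key positions and frequency values below are rough illustrations I made up, not data from any real phone.

```python
# Hedged sketch: rank candidate corrections by word frequency, discounted
# by how far the typed keys sit from the candidate's keys. Positions are
# crude one-dimensional stand-ins for a QWERTY layout.

KEY_POS = {"q": 0, "w": 1, "e": 2, "r": 3, "t": 4,
           "y": 5, "u": 6, "i": 7, "o": 8, "p": 9,
           "k": 7.5, "l": 8.5}   # assumed, approximate key positions

WORD_FREQ = {"ok": 0.9, "ol": 0.01, "or": 0.5}  # assumed relative frequencies

def key_distance(a, b):
    return abs(KEY_POS[a] - KEY_POS[b])

def best_correction(typed, candidates):
    """Pick the candidate with the best frequency-vs-distance score."""
    def score(word):
        dist = sum(key_distance(t, c) for t, c in zip(typed, word))
        return WORD_FREQ.get(word, 0) / (1 + dist)
    return max(candidates, key=score)

# Typing "Ol" should be corrected to "Ok": "ok" is frequent and "l" is
# right next to "k", so the small distance barely hurts its score.
suggestion = best_correction("ol", ["ok", "ol", "or"])
```

Real keyboards use two-dimensional key coordinates and far richer language models, but the frequency-discounted-by-distance trade-off is the core idea.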
Finally, one element of autocorrect that I find especially impressive is its ability to adapt over time. For me, having the last name "Brackenridge" can be a bear to type. Fortunately, my phone has "learned" over time that if I begin to type my last name, the rest of the name can be assumed. While many people complain about autocorrect when it messes up, or post the funny errors and manipulations that can be forced through, at the end of the day, I think we can all agree that autocorrect is much more helpful than it is harmful.
http://www.howtogeek.com/222769/how-to-tame-and-improve-the-iphones-autocorrect-feature/
Friday, November 4, 2016
Route Finding
This week, I will be discussing the algorithms that go into route finding. As someone who has always been very interested in maps, the application of route finding to a problem such as finding the fastest way to get from point A to point B is very appealing. In fact, one of the things that bothers me more than it should is that throughout the Richmond campus, there are many routes that seem equally fast, so you can never be sure you're going the best way.
In a programming context, route finding becomes simpler when the routes are reduced to a more regimented, grid-like structure. Take this example from Khan Academy, for instance:
In this picture, a maze has a goal (the red square) and a yellow circle, which began the maze at the square labeled 8 at the bottom (it is a simulation, and I couldn't take the picture in time). When a user inputs the goal square, the program maps backwards from that square: the square directly to the right of the red square is one unit away, the squares above and below that one are two units away, and so on. It has to take this incremental approach because a direct calculation of the distance between the circle and the goal would ignore the gray walls of the maze. The end result is that the program reaches the yellow circle from the red square more quickly by going "down" than it would have by going "up," so that is the path the circle takes.
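The backward mapping described above is essentially a breadth-first search (BFS) flood fill: start at the goal, label each reachable square with its distance, and never cross a wall. The tiny grid below is a made-up example, not the Khan Academy maze itself.

```python
# BFS flood fill from the goal: every reachable square gets labeled with
# its step count, and walls are simply never entered.
from collections import deque

def distances_from(goal, walls, width, height):
    """Map each reachable (x, y) square to its step count from the goal."""
    dist = {goal: 0}
    queue = deque([goal])
    while queue:
        x, y = queue.popleft()
        for nx, ny in [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]:
            if (0 <= nx < width and 0 <= ny < height
                    and (nx, ny) not in walls and (nx, ny) not in dist):
                dist[(nx, ny)] = dist[(x, y)] + 1
                queue.append((nx, ny))
    return dist

# 3x3 grid with one wall in the center; goal in the top-left corner.
# The wall forces the far corner to be 4 steps away instead of 4... the
# direct diagonal distance is irrelevant, just as in the maze above.
d = distances_from(goal=(0, 0), walls={(1, 1)}, width=3, height=3)
```

Once every square is labeled, the circle just steps to whichever neighbor has the smaller number, which reproduces the "down beats up" result in the maze.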
In the context of maps, route finding just becomes more complex and difficult to calculate, but follows the same principles. Using this screenshot of the Richmond campus, we can trace out a route from Sarah Brunet Hall to the target similarly.
From the target, the gray circle at the bottom, a program could trace 0.01-mile increments. It would likely reach the fork in the road in one or two increments, then continue to add the same increment around the loop. It would also explore options such as Ryland Circle, but those would obviously be irrelevant to the end goal. Letting Google Maps do the calculations, we find the fastest route.
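When the road segments have different lengths, the incremental expansion described above becomes Dijkstra's shortest-path algorithm: always extend the closest frontier point first. The graph below is a made-up stand-in for the campus roads, not the real Richmond map; the names and mileages are purely illustrative.

```python
# Dijkstra's algorithm on a small, invented road graph (weights in miles).
import heapq

def shortest_distances(graph, start):
    """Return the shortest distance from start to every reachable node."""
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for neighbor, weight in graph[node]:
            nd = d + weight
            if nd < dist.get(neighbor, float("inf")):
                dist[neighbor] = nd
                heapq.heappush(heap, (nd, neighbor))
    return dist

# Two ways around the loop to "brunet"; the east side is shorter.
campus = {
    "target":    [("fork", 0.02)],
    "fork":      [("target", 0.02), ("loop_east", 0.10), ("loop_west", 0.15)],
    "loop_east": [("fork", 0.10), ("brunet", 0.05)],
    "loop_west": [("fork", 0.15), ("brunet", 0.08)],
    "brunet":    [("loop_east", 0.05), ("loop_west", 0.08)],
}
dist = shortest_distances(campus, "target")
```

This is the weighted generalization of the maze flood fill: a dead-end branch like Ryland Circle gets explored a few increments in, then abandoned because it never reaches the destination more cheaply.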
https://www.khanacademy.org/computing/computer-science/algorithms/intro-to-algorithms/a/route-finding
https://www.google.com/maps/@37.5774941,-77.5373498,16.76z
Friday, October 28, 2016
The Pilot Earpiece
This week, I'll be writing about a fairly futuristic piece of new technology. When I first heard of the Pilot earpiece, I was reminded of a science fiction book I read in elementary school in which aliens had chips that could be put on a tongue to translate languages instantly. The Pilot is not far off, as the earpieces operate similarly, albeit not quite as smoothly. Still, as a real world application of such an unbelievable idea, the earbuds are impressive. A sample video is embedded at the bottom of the blog.
The Pilot earpiece, which fits like an earbud.
The earpieces work by essentially combining speech recognition software with a spoken output function. Similarly to Apple's Siri, the earpiece takes a voice and what it is saying as input. Then it gets to work translating the input into the language of choice; this part is done over a Bluetooth connection to a smartphone app. Once the sentence has been successfully translated, the piece plays it in the new language directly into the user's ear. On its surface, nothing about the device seems overly complicated, as it is a fairly simple input/output relationship. The real challenge comes in achieving accuracy when you consider not only how many languages there are, but also the accents, idioms, and lack of clarity that occur in everyday conversation.
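The input/output pipeline described above can be sketched as three chained steps. The Pilot's actual recognition and translation engines are proprietary, so every function and the tiny phrase table here are stand-ins I invented purely to show the shape of the pipeline.

```python
# Illustrative three-stage pipeline: recognize speech, translate the text,
# then synthesize audio in the target language. All stages are stubs.

PHRASES_EN_TO_ES = {"hello": "hola", "thank you": "gracias"}  # assumed data

def recognize(audio):
    """Stand-in for speech recognition: raw audio -> normalized text."""
    return audio.strip().lower()

def translate(text, table):
    """Stand-in for the phone-side translation step."""
    return table.get(text, "[untranslatable]")

def speak(text):
    """Stand-in for playing synthesized speech into the user's ear."""
    return f"<audio: {text}>"

def pilot_pipeline(audio):
    return speak(translate(recognize(audio), PHRASES_EN_TO_ES))
```

The hard engineering lives inside the first two stubs; accents, idioms, and mumbling all have to be absorbed before the lookup-like final step can work.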
The Pilot is currently being crowdfunded, with potential buyers reserving pairs until the company has enough to begin mass production (likely in the spring). While the first version of the product will likely have its lags and hiccups, the potential of the earbuds in the future is exciting.
http://www.waverlylabs.com/#_overview
http://thenextweb.com/gadgets/2016/05/17/pilot-translates-just-like-the-babel-fish/
Friday, October 21, 2016
The Failure of my Subaru's Programming
I was struggling to come up with a topic to write a journal entry about when I realized that for most of this week, I have been dealing with a programming-related issue. My car on campus, a 2010 Subaru, has had more than its share of issues throughout its life.
A similar "powder blue" model.
Most recently, it has started to simultaneously flash a battery light, a skid warning light, and an electronic brake light, spin the speedometer needle back to zero, and flicker the headlights.
Some of the lights that will occasionally flash on my dashboard.
Even with my limited knowledge of cars, I assumed there was an issue with either the battery or the alternator, both of which have given me trouble in the past. When I took it to the shop today, though, the mechanics found nothing wrong with either and failed to diagnose the problem. After some investigation online, I came across what could be the issue: rusted or loosened connections in the car's internal wiring. Because of this disconnect, the car's central processing unit is receiving mixed messages. There is likely an internal voltmeter connected to the battery that reads the battery's level continuously. With a poor connection, however, this reading is only occasionally transmitted. Thus, when the connections are jolted loose, perhaps by a hard acceleration or turn, the central processing unit does not receive the input from the voltmeter and reads the battery's voltage as 0.
There are likely similar signal transmission errors relating to the speedometer (causing it to assume the car is not moving), the electronic brake (it may be programmed with a rule such as "if the electronic brake is not engaged, do not display the light," so a missing input would turn the light on), and the skid sensors, although I am not sure how this lapse in the internal logic causes the headlights to flicker. Unfortunately, I do not have anywhere close to the proper understanding of what goes on under the hood of a car to fix these issues, so for now, I will continue to monitor the situation and hope for the best.
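My guess about the logic can be written down as a sketch: a lost voltmeter signal falls back to a default of 0 volts, which trips the warning light. The threshold and the fallback behavior are assumptions for illustration, not how Subaru's controller actually works.

```python
# Illustrative sketch of "missing sensor input read as zero."
# A healthy signal is a float; a dropped connection delivers None.

LOW_VOLTAGE_WARNING = 11.5  # assumed threshold, in volts

def read_battery(signal):
    """A missing reading (None) is treated as 0 volts."""
    return signal if signal is not None else 0.0

def battery_light_on(signal):
    return read_battery(signal) < LOW_VOLTAGE_WARNING

# A normal reading leaves the light off; a dropped connection turns it on,
# which would make the light flicker as the loose wiring connects and
# disconnects over bumps and hard turns.
```

The same pattern would explain the speedometer (missing speed pulses read as 0 mph) and, inverted, the brake light (missing "not engaged" input defaults to showing the warning).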
http://www.cars101.com/subaru/legacy/legacy2014photos2.html
http://www.cars101.com/subaru/outback/outback2010.html
Friday, September 30, 2016
Combat Drones
Drones have received a lot of attention in the American media recently for a variety of reasons. For the avid Instagram user, they have become an exotic addition to the picture-taking arsenal. For many others, they are far more significant as a new dimension of modern warfare. Two types of drones are typical in warfare: one simpler, the other more complex and, from a computer science standpoint, more intriguing.
A diagram depicting the relationship between the operator and the drone.
At their core, most UAVs (unmanned aerial vehicles) are quite similar to the remote control cars kids play with. The vehicle is controlled entirely by inputs from a separate remote control, from which a user can change the UAV's speed or direction, use it for reconnaissance purposes such as photography, or engage the drone in physical warfare. Historically, UAVs were first implemented in practice by the Israeli army in the 1970s, then the Iranian army in the 1980s, and subsequently the U.S. army in the Gulf War in the 1990s. At that point, UAVs were used for reconnaissance or as decoys. The first kill by a drone occurred in October of 2001, and such strikes have occurred with increasing frequency since, a controversial tactic employed by the American armed forces.
An unmanned aerial vehicle firing a rocket.
Drones can have varying levels of autonomy, however. Many perform almost all functions under the guidance of an operator but can carry out a function such as "return to base" by themselves. Others have increased capabilities, relying on sensors of the world around them to inform how they act. These drones operate by relying on a large number of loops, from algorithms calculating the most efficient way to travel in terms of fuel and time to constructing the actual trajectory from one location to another. Fully autonomous UAVs are said to be entirely cognizant of their surroundings and capable of total independence in decision making. While fascinating from a programming standpoint, the political and ethical side of autonomous drone warfare has limited its implementation. Increased fear of malfunction or hacking has also left decision makers wary of releasing autonomous UAVs in full force. The capabilities of these machines are nonetheless astounding and will certainly factor into the engineering landscape of the future.
https://en.wikipedia.org/wiki/Unmanned_aerial_vehicle#/media/File:Autonomous_control_basics.jpg
https://en.wikipedia.org/wiki/Unmanned_combat_aerial_vehicle
https://en.wikipedia.org/wiki/Unmanned_aerial_vehicle
http://www.globalexchange.org/blogs/peopletopeople/tag/drone-warfare/