Retinal scans are most often seen in science fiction movies in some sort of high-tech, confidential lab. This makes sense; to most people, a form of security that relies on picking up the intricacies of the human eye is very advanced and impressive. The idea behind retinal scans was first introduced much earlier than I anticipated, in 1935, by two doctors in New York. They understood the uniqueness of the retina, the layer of tissue at the back of the eyeball, but could only dream of practical applications. The first implementation of their idea came about 40 years later, when a man named Robert Hill created and patented the first retinal scanner. Today, retinal scanners are used for a variety of applications beyond confidential passwords. The scanning technology can be found in prisons, at ATMs, and even in doctors' offices, since some diseases change the makeup of the retina.
The technology behind the retinal scanner depends on the individuality of each human's retina; even identical twins have different retinas. More specifically, it is the capillaries (tiny blood vessels) in the retina that set one structure apart from another. These capillaries have a different density than the tissue that surrounds them, which means that an infrared beam directed at the retina will be absorbed more by some regions than others. By setting up a lens to read the reflection of the infrared beam, the scanner can capture a map of the user's capillaries. The system can then compare this reading with the one on file and, because the structure of the retina stays fixed over time, determine whether they match.
A map of retinal capillaries
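Here's a rough sketch in Java of how that matching step might look. Everything in it is a stand-in: a real scanner produces a far more detailed map, and the 95% agreement threshold is just a number I made up for illustration. The idea is only that the reading on file and the new reading are stored in the same grid format and compared cell by cell.

// A minimal sketch (not a real scanner API): treat each retina reading as a
// small grid of true/false cells, where true means the infrared reflection
// showed a capillary at that spot. Matching just counts how many cells agree.
public class RetinaMatchSketch {
    // Fraction of cells that must agree before we call it a match (made-up threshold).
    static final double MATCH_THRESHOLD = 0.95;

    static boolean matches(boolean[][] onFile, boolean[][] newScan) {
        int agree = 0, total = 0;
        for (int r = 0; r < onFile.length; r++) {
            for (int c = 0; c < onFile[r].length; c++) {
                total++;
                if (onFile[r][c] == newScan[r][c]) agree++;
            }
        }
        return (double) agree / total >= MATCH_THRESHOLD;
    }

    public static void main(String[] args) {
        boolean[][] stored = { { true, false, true }, { false, true, false } };
        boolean[][] scan   = { { true, false, true }, { false, true, false } };
        System.out.println(matches(stored, scan));   // prints true
    }
}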
Errors can occur with retinal scanners because of medical issues. Diabetes, glaucoma, and other diseases can alter the pattern of capillaries within the retina, rendering an earlier reading of the retina useless. Another downside of retinal scanners is, as you might imagine, a very high cost. Even as the technology begins to spread, the price remains steep. However, the fact that retinal scanners are a reality outside of the movie theater is an exciting step toward us all living a sci-fi reality.
https://en.wikipedia.org/wiki/Retinal_scan
http://www.armedrobots.com/new-retinal-scanners-can-find-you-in-a-crowd-in-dim-light
http://www.oculist.net/others/ebook/generalophthal/server-java/arknoid/amed/vaughan/co_chapters/ch015/ch015_print_01.html
Monday, November 28, 2016
Wednesday, November 16, 2016
Twenty Questions
This week, I will be writing about the game Twenty Questions. I was originally introduced to the game on family road trips that usually ended quickly because my sister and I didn't think of very complicated things, but I later got a handheld video game version of Twenty Questions. Even though I thought I would always be able to beat it, it consistently surprised me with its ability to guess the right answer, or something very close to it. Recently, someone mentioned the game to me and I realized that it is really just a long algorithm.
An example of the handheld game I used to have.
Each time the program asks the user a question, it is able to narrow down the vast number of options to a smaller subset. For example, a typical starting question is whether the answer is a person, a place, or a thing. Depending on the answer, the algorithm can rule out the other two subsets. By continually asking questions, this narrowing compounds, so that by the end of the 20 questions there are only a few reasonable options left.
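Here's a rough sketch in Java of that narrowing step. The candidates, tags, and questions below are all made up for illustration; the real game's database is obviously far larger, but the filtering idea is the same.

import java.util.ArrayList;
import java.util.List;

// A minimal sketch (hypothetical data, not the real 20Q database): every
// candidate answer is tagged with the categories it belongs to, and each
// yes/no answer throws out the candidates that don't fit.
public class TwentyQuestionsSketch {
    record Candidate(String name, List<String> tags) {}

    static List<Candidate> narrow(List<Candidate> candidates, String tag, boolean answeredYes) {
        List<Candidate> remaining = new ArrayList<>();
        for (Candidate c : candidates) {
            boolean hasTag = c.tags().contains(tag);
            if (hasTag == answeredYes) remaining.add(c);   // keep only consistent candidates
        }
        return remaining;
    }

    public static void main(String[] args) {
        List<Candidate> all = List.of(
            new Candidate("dog",   List.of("thing", "animal")),
            new Candidate("Paris", List.of("place")),
            new Candidate("piano", List.of("thing", "instrument")));

        List<Candidate> afterQ1 = narrow(all, "thing", true);        // "Is it a thing?" -> yes
        List<Candidate> afterQ2 = narrow(afterQ1, "animal", false);  // "Is it an animal?" -> no
        System.out.println(afterQ2);   // only piano remains
    }
}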
In order for this program to function, the game must be pre-programmed with descriptions of thousands of possible answers. It must also be pre-programmed with hundreds of possible questions to ask, along with the categories each answer falls into. The narrowing works much like a binary search algorithm, which is described on Wikipedia as "comparing the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues on the remaining half until it is successful." For Twenty Questions, this means the game eliminates the other subsets of answers and continues to search through the correct subset. What seems like a simple game becomes much more complex when programming principles are applied.
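For reference, here is the textbook form of binary search that the Wikipedia quote describes, written as a short Java sketch on a plain sorted array of numbers.

// A direct translation of the binary search idea quoted above: compare the
// target to the middle element and throw away the half that can't contain it.
public class BinarySearchSketch {
    static int binarySearch(int[] sorted, int target) {
        int lo = 0, hi = sorted.length - 1;
        while (lo <= hi) {
            int mid = (lo + hi) / 2;
            if (sorted[mid] == target) return mid;        // found it
            else if (sorted[mid] < target) lo = mid + 1;  // target must be in the right half
            else hi = mid - 1;                            // target must be in the left half
        }
        return -1;   // not found
    }

    public static void main(String[] args) {
        int[] values = { 2, 5, 8, 12, 16, 23, 38 };
        System.out.println(binarySearch(values, 23));  // prints 5 (its index)
        System.out.println(binarySearch(values, 7));   // prints -1
    }
}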
https://en.wikipedia.org/wiki/Twenty_Questions#Computers.2C_scientific_method_and_situation_puzzles
https://en.wikipedia.org/wiki/Binary_search_algorithm
https://www.amazon.com/Radica-20Q-Artificial-Intelligence-Game/dp/B0001NE2AK
Friday, November 11, 2016
Autocorrect
In class this week, I believe on Tuesday, Dr. Jory mentioned that all autocorrect programs (such as in Microsoft Word, smartphones, etc.) can be simplified down to just the primitive data types available in Java. At its base, an autocorrect function works off of simple word frequencies. Smartphones, for example, are programmed to assume that words common in texting are what the user intends to input. If a user begins to respond to a text with the character "O" and follows it up with the character "l," the program will assume they meant to type "Ok." However, this is not only due to the frequency of the word "Ok" in texting.
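Here's a rough Java sketch of the frequency part alone. The word list and counts are invented, and real phones use far bigger models, but it shows why frequency by itself isn't enough: typed literally, "Ol" never matches "Ok" at all.

import java.util.Map;

// A minimal sketch (made-up frequency numbers): given what the user has typed
// so far, suggest the most common word that starts with that prefix.
public class FrequencySuggestSketch {
    // Hypothetical counts of how often each word shows up in text messages.
    static final Map<String, Integer> FREQUENCY = Map.of(
        "Ok", 9000, "Old", 300, "Olive", 40, "On", 7000);

    static String suggest(String typedSoFar) {
        String best = typedSoFar;
        int bestCount = -1;
        for (Map.Entry<String, Integer> e : FREQUENCY.entrySet()) {
            if (e.getKey().startsWith(typedSoFar) && e.getValue() > bestCount) {
                best = e.getKey();
                bestCount = e.getValue();
            }
        }
        return best;
    }

    public static void main(String[] args) {
        System.out.println(suggest("Ol"));  // prints "Old" -- frequency alone misses "Ok"
    }
}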
These smartphones are also made to account for the physical human error that is natural when typing on a smartphone. In the "Ok" example, the phone knows the "k" and "l" keys are located very close to each other; there is likely a table that stores the distance from one key to another. If, on the contrary, the user had pressed the "e" key after the "O," the phone would make the more likely assumption that they meant to press "r," based on how close together "e" and "r" are located.
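Here's a rough Java sketch of that second ingredient. The neighbor lists and tiny dictionary are made up, and a real keyboard model would use actual key coordinates rather than short lists, but it shows how "Ol" can be re-read as "Ok" and "Oe" as "Or."

import java.util.List;
import java.util.Map;

// A minimal sketch (only a few keys are listed): model which keys sit next to
// each other, so a typo can be swapped for a physically nearby key that makes a word.
public class KeyDistanceSketch {
    static final Map<Character, List<Character>> NEIGHBORS = Map.of(
        'l', List.of('k', 'o', 'p'),
        'e', List.of('w', 'r', 'd'),
        'k', List.of('j', 'l', 'm'));

    static final List<String> DICTIONARY = List.of("Ok", "Or", "On");

    // If the word as typed isn't in the dictionary, try swapping the last
    // letter with each of its physical neighbors on the keyboard.
    static String correct(String typed) {
        if (DICTIONARY.contains(typed)) return typed;
        char last = Character.toLowerCase(typed.charAt(typed.length() - 1));
        for (char neighbor : NEIGHBORS.getOrDefault(last, List.of())) {
            String candidate = typed.substring(0, typed.length() - 1) + neighbor;
            if (DICTIONARY.contains(candidate)) return candidate;
        }
        return typed;   // no better guess
    }

    public static void main(String[] args) {
        System.out.println(correct("Ol"));  // prints "Ok"
        System.out.println(correct("Oe"));  // prints "Or"
    }
}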
Finally, one element of autocorrect that I find especially impressive is its ability to adapt over time. For me, having the last name "Brackenridge" can be a bear to type. Fortunately, my phone has "learned" over time that if I begin to type my last name, the rest of the name can be assumed. While many people complain about autocorrect when it messes up, or post the funny errors and manipulations that can be forced through it, at the end of the day I think we can all agree that autocorrect is much more helpful than it is harmful.
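A rough sketch of how that learning could work, with a made-up rule: once the user has kept an unknown word a couple of times instead of accepting a correction, treat it as a real word from then on. How phones actually decide this isn't public, so this is only the general idea.

import java.util.HashMap;
import java.util.Map;

// A minimal sketch (the real learning logic is proprietary): count how often the
// user keeps an unknown word, and stop "correcting" it once it has been kept enough.
public class LearnedWordsSketch {
    static final int TIMES_BEFORE_LEARNED = 2;   // made-up threshold
    static final Map<String, Integer> timesKept = new HashMap<>();

    static void userKeptWord(String word) {
        timesKept.merge(word, 1, Integer::sum);
    }

    static boolean isLearned(String word) {
        return timesKept.getOrDefault(word, 0) >= TIMES_BEFORE_LEARNED;
    }

    public static void main(String[] args) {
        System.out.println(isLearned("Brackenridge"));  // false -- would still get "corrected"
        userKeptWord("Brackenridge");
        userKeptWord("Brackenridge");
        System.out.println(isLearned("Brackenridge"));  // true -- left alone from now on
    }
}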
http://www.howtogeek.com/222769/how-to-tame-and-improve-the-iphones-autocorrect-feature/
Friday, November 4, 2016
Route Finding
This week, I will be discussing the algorithms that go into route finding. As someone who has always been very interested in maps, I find the application of route finding to a problem such as getting from point A to point B as quickly as possible very appealing. In fact, one of the things that bothers me more than it should is that throughout the Richmond campus, there are many routes that seem to be equally fast, so you can never be sure that you're going the best way.
In the programming world, route finding becomes simpler when the routes are reduced to a more regimented, grid-like structure. Take this example from Khan Academy, for instance:
In this picture, a maze has a goal (the red square) and a yellow circle, which began the maze at the square labeled 8 at the bottom (it is a simulation and I couldn't take the picture in time). When a user inputs the goal square, the program then maps backward from that square. The square directly to the right of the red square is one unit away, the squares above and below that one are two units away, and so on. It has to take this incremental approach because a direct calculation of the distance between the circle and the goal would ignore the gray walls of the maze. The end result is that the program reaches the yellow circle from the red square more quickly by going "down" than it would have by going "up," so that is the path the circle takes.
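Here's a rough Java sketch of that backward labeling on a tiny made-up maze (the layout below has nothing to do with the Khan Academy one). Starting from the goal square, every open square gets labeled with how many steps it is from the goal, spreading out one ring at a time so the walls are respected.

import java.util.ArrayDeque;
import java.util.Queue;

// A minimal sketch: label every open square ('.') with its step distance from
// the goal ('G'), expanding outward one ring at a time and never crossing walls ('#').
public class MazeDistanceSketch {
    public static void main(String[] args) {
        char[][] maze = {
            "#######".toCharArray(),
            "#G..#.#".toCharArray(),
            "###.#.#".toCharArray(),
            "#...#.#".toCharArray(),
            "#.###.#".toCharArray(),
            "#.....#".toCharArray(),
            "#######".toCharArray()};

        int rows = maze.length, cols = maze[0].length;
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) java.util.Arrays.fill(row, -1);   // -1 = not reached yet

        // Find the goal square and start the search there.
        Queue<int[]> queue = new ArrayDeque<>();
        for (int r = 0; r < rows; r++)
            for (int c = 0; c < cols; c++)
                if (maze[r][c] == 'G') { dist[r][c] = 0; queue.add(new int[]{r, c}); }

        int[][] steps = { {1, 0}, {-1, 0}, {0, 1}, {0, -1} };
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            for (int[] s : steps) {
                int r = cur[0] + s[0], c = cur[1] + s[1];
                if (maze[r][c] != '#' && dist[r][c] == -1) {   // open square, not labeled yet
                    dist[r][c] = dist[cur[0]][cur[1]] + 1;
                    queue.add(new int[]{r, c});
                }
            }
        }

        System.out.println("Steps from goal to bottom-left open square: " + dist[5][1]);
    }
}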
In the context of maps, route finding just becomes more complex and difficult to calculate, but follows the same principles. Using this screenshot of the Richmond campus, we can trace out a route from Sarah Brunet Hall to the target similarly.
From the target, the gray circle at the bottom, a program could trace outward in .01-mile increments. It would likely reach the fork in the road within one or two increments, then continue adding the same increment around the loop. It would also explore options such as Ryland Circle, but would find them irrelevant to the end goal. Letting Google Maps do the calculations, we find that the fastest route is:
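Here's a rough Java sketch of the same idea on roads instead of a grid. The place names and mileages below are completely invented, not real campus data; the point is just that the roads become a graph with distances on the edges, and the search (this version is Dijkstra's algorithm) always extends the closest place it hasn't finished yet, much like tracing outward in small increments.

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// A minimal sketch with invented roads: find the shortest mileage from a start
// building to a target by always expanding the nearest unfinished intersection.
public class CampusRouteSketch {
    record Edge(String to, double miles) {}
    record Reach(String place, double miles) {}

    static final Map<String, List<Edge>> ROADS = Map.of(
        "SarahBrunet",  List.of(new Edge("Fork", 0.05)),
        "Fork",         List.of(new Edge("SarahBrunet", 0.05), new Edge("LoopNorth", 0.20),
                                new Edge("LoopSouth", 0.15), new Edge("RylandCircle", 0.10)),
        "LoopNorth",    List.of(new Edge("Fork", 0.20), new Edge("Target", 0.25)),
        "LoopSouth",    List.of(new Edge("Fork", 0.15), new Edge("Target", 0.20)),
        "RylandCircle", List.of(new Edge("Fork", 0.10)),    // explored, but a dead end for this trip
        "Target",       List.of(new Edge("LoopNorth", 0.25), new Edge("LoopSouth", 0.20)));

    public static void main(String[] args) {
        Map<String, Double> shortest = new HashMap<>();
        PriorityQueue<Reach> frontier = new PriorityQueue<>(
            (a, b) -> Double.compare(a.miles(), b.miles()));
        frontier.add(new Reach("SarahBrunet", 0.0));

        while (!frontier.isEmpty()) {
            Reach current = frontier.poll();
            if (shortest.containsKey(current.place())) continue;   // already reached a shorter way
            shortest.put(current.place(), current.miles());
            for (Edge road : ROADS.get(current.place())) {
                frontier.add(new Reach(road.to(), current.miles() + road.miles()));
            }
        }
        System.out.println("Miles to Target: " + shortest.get("Target"));   // shortest is via LoopSouth
    }
}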
https://www.khanacademy.org/computing/computer-science/algorithms/intro-to-algorithms/a/route-finding
https://www.google.com/maps/@37.5774941,-77.5373498,16.76z