Retinal scans are most often seen in science fiction movies, in some sort of high-tech, confidential lab. This makes sense; to most people, a security measure that relies on picking up the intricacies of the human eye seems very advanced and impressive. The idea behind retinal scans was first introduced much earlier than I anticipated, in 1935, by two doctors in New York. They understood the uniqueness of the pattern of blood vessels in the retina, the light-sensitive layer at the back of the eye, but could only dream of practical applications. The first implementation of their idea came about 40 years later, when a man named Robert Hill created and patented the first retinal scanner. Today, retinal scans are used for a variety of applications beyond just confidential passwords. The scanning technology can be found in prisons, at ATMs, and even in doctors' offices, since some diseases alter the retina's makeup.
The technology behind the retinal scanner depends on the individuality of each person's retina; even identical twins have different retinas. More specifically, it is the capillaries (tiny blood vessels) in the retina that set one structure apart from another. These capillaries have a different density than the tissue surrounding them, which means that an infrared beam directed at the retina is absorbed more by some regions than others. By setting up a lens to read the reflection of the infrared beam, the scanner can capture a map of the user's capillaries. The system can then overlay this reading on the reading on file and, because the structure of the retina stays essentially fixed over a lifetime, find a match.
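Conceptually, the comparison step boils down to overlaying two maps and measuring how well they agree. Below is a minimal sketch of that idea, assuming each scan has already been reduced to a grid of yes/no "capillary here" cells; the grid representation and the 95% threshold are illustrative assumptions, not details of any real scanner.

```java
// Minimal sketch: compare a fresh scan against the map on file, cell by cell.
public class RetinaMatcher {
    // Fraction of grid cells on which the two maps agree.
    static double similarity(boolean[][] scan, boolean[][] onFile) {
        int matches = 0, total = 0;
        for (int r = 0; r < scan.length; r++) {
            for (int c = 0; c < scan[r].length; c++) {
                if (scan[r][c] == onFile[r][c]) matches++;
                total++;
            }
        }
        return (double) matches / total;
    }

    static boolean isMatch(boolean[][] scan, boolean[][] onFile) {
        return similarity(scan, onFile) >= 0.95; // hypothetical tolerance
    }

    public static void main(String[] args) {
        boolean[][] onFile = {{true, false}, {false, true}};
        boolean[][] scan   = {{true, false}, {false, true}};
        System.out.println(isMatch(scan, onFile)); // true: the maps line up
    }
}
```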
A map of retinal capillaries
Errors can occur with retinal scanners because of medical issues. Diabetes, glaucoma, and other diseases can alter the pattern of capillaries within the retina, rendering an initial reading useless. Another downside of retinal scanners is, as you might imagine, a very high cost; even as the technology begins to spread, the price remains steep. However, the fact that retinal scanners are a reality outside of just the movie theater is an exciting step towards us all living a sci-fi reality.
https://en.wikipedia.org/wiki/Retinal_scan
http://www.armedrobots.com/new-retinal-scanners-can-find-you-in-a-crowd-in-dim-light
http://www.oculist.net/others/ebook/generalophthal/server-java/arknoid/amed/vaughan/co_chapters/ch015/ch015_print_01.html
Monday, November 28, 2016
Wednesday, November 16, 2016
Twenty Questions
This week, I will be writing about the game Twenty Questions. I was first introduced to the game on family road trips, which usually ended quickly because my sister and I didn't think of very complicated things, but I later got a handheld Twenty Questions video game. Even though I thought I would always be able to beat it, it consistently surprised me with its ability to guess the right answer, or something very close to it. Recently, someone mentioned the game to me, and I realized that it is really just a long algorithm.
An example of the handheld game I used to have.
Each time the program asks the user a question, it narrows the vast number of options down to a smaller subset. For example, a typical starting question is "person, place, or thing?" Depending on the answer, the algorithm can rule out the other two categories. By continually asking questions, this process compounds, so that by the end of the 20 questions there are only a few reasonable options left.
In order for this program to function, the game must be pre-programmed with descriptions of thousands of possible answers. It must also be pre-programmed with hundreds of possible questions to ask, along with the categories of answers each question distinguishes. The closest technical analogue is a binary search algorithm, which Wikipedia describes as "comparing the target value to the middle element of the array; if they are unequal, the half in which the target cannot lie is eliminated and the search continues on the remaining half until it is successful." For Twenty Questions, this means the program eliminates the subsets of answers ruled out by each response and continues searching through the subset that remains. What seems like a simple game becomes much more complex when programming ideas are applied; a minimal sketch of the elimination process is shown below.
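Here, the "array" is the pool of possible answers, and each question plays the role of the comparison. The sketch below illustrates the idea, assuming the program stores each possible answer with a set of yes/no attributes; the tiny answer list and attribute names are made up for illustration.

```java
import java.util.*;

// Each candidate answer is tagged with the attributes it satisfies.
public class TwentyQuestions {
    public static void main(String[] args) {
        Map<String, Set<String>> answers = new HashMap<>();
        answers.put("dog",     Set.of("thing", "alive", "has fur"));
        answers.put("beach",   Set.of("place"));
        answers.put("teacher", Set.of("person", "alive"));
        answers.put("rock",    Set.of("thing"));

        List<String> candidates = new ArrayList<>(answers.keySet());

        // Simulated "yes" answers to the first questions: it is a thing, and it is alive.
        String[] yesAttributes = {"thing", "alive"};
        for (String attribute : yesAttributes) {
            candidates.removeIf(c -> !answers.get(c).contains(attribute));
            System.out.println("After asking about '" + attribute + "': " + candidates);
        }
        // Each answer discards the part of the pool where the target cannot lie,
        // shrinking it toward a single guess ("dog").
    }
}
```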
https://en.wikipedia.org/wiki/Twenty_Questions#Computers.2C_scientific_method_and_situation_puzzles
https://en.wikipedia.org/wiki/Binary_search_algorithm
https://www.amazon.com/Radica-20Q-Artificial-Intelligence-Game/dp/B0001NE2AK
Friday, November 11, 2016
Autocorrect
In class this week, I believe on Tuesday, Dr. Jory mentioned something about how all autocorrect programs (such as in Microsoft Word, smartphones, etc.) can be simplified down to just the primitive data types available in Java. At its base, an autocorrect function works off of simple word frequencies. Smartphones, for example, are pre-programmed to assume that words common in texting are what the user intends to input. If a user begins to respond to a text with the character "O" and follows it up with the character "l," the program will assume they meant to type "Ok." However, this is not only due to the frequency of the word "Ok" in texting.
These smartphones are also built to understand that physical human error is natural when typing on a phone. In the "Ok" example, the computer knows the "k" and "l" keys are located very close to each other; there is likely a table stored in the software of the distances from one key to another. If, instead, the user presses the "e" key after the "O," the computer makes the more likely assumption that they meant to press "r," based on how close together "e" and "r" are located.
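A minimal sketch of that idea, assuming the keyboard is stored as a grid of row and column positions: suggest the candidate word whose letters sit physically closest to the keys that were actually pressed. The layout coordinates and the scoring rule are illustrative assumptions, not how any particular phone works.

```java
import java.util.*;

public class KeyDistance {
    // Toy QWERTY layout: each key gets a (row, column) position.
    static final Map<Character, int[]> POS = new HashMap<>();
    static {
        String[] rows = {"qwertyuiop", "asdfghjkl", "zxcvbnm"};
        for (int r = 0; r < rows.length; r++)
            for (int c = 0; c < rows[r].length(); c++)
                POS.put(rows[r].charAt(c), new int[]{r, c});
    }

    // Straight-line distance between two keys on the toy grid.
    static double dist(char a, char b) {
        int[] p = POS.get(a), q = POS.get(b);
        return Math.hypot(p[0] - q[0], p[1] - q[1]);
    }

    // Total "finger slip" needed to turn the typed letters into the candidate word.
    static double cost(String typed, String candidate) {
        if (typed.length() != candidate.length()) return Double.MAX_VALUE;
        double total = 0;
        for (int i = 0; i < typed.length(); i++)
            total += dist(typed.charAt(i), candidate.charAt(i));
        return total;
    }

    public static void main(String[] args) {
        String typed = "ol"; // the user actually pressed "o" then "l"
        String[] candidates = {"ok", "or", "on"};
        Arrays.sort(candidates, Comparator.comparingDouble(c -> cost(typed, c)));
        System.out.println("Best guess: " + candidates[0]); // "ok", since "l" sits right next to "k"
    }
}
```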
Finally, one element of autocorrect that I find especially impressive is its ability to adapt over time. For me, having the last name "Brackenridge" can be a bear to type. Fortunately, my phone has "learned" over time that if I begin to type my last name, the rest of it can be assumed. While many people complain about autocorrect when it messes up, or post the funny errors and manipulations that can be forced out of it, at the end of the day I think we can all agree that autocorrect is much more helpful than it is harmful.
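That "learning" can be as simple as keeping a per-user count of words the user has typed and accepting them as known words once the count is high enough. Below is a minimal sketch of the idea; the threshold of three uses is a made-up number.

```java
import java.util.*;

// Toy personal dictionary: once a word has been typed enough times,
// treat it as a known word instead of "correcting" it.
public class PersonalDictionary {
    private final Map<String, Integer> counts = new HashMap<>();
    private static final int LEARN_THRESHOLD = 3; // hypothetical

    void recordTyped(String word) {
        counts.merge(word.toLowerCase(), 1, Integer::sum);
    }

    boolean isLearned(String word) {
        return counts.getOrDefault(word.toLowerCase(), 0) >= LEARN_THRESHOLD;
    }

    public static void main(String[] args) {
        PersonalDictionary dict = new PersonalDictionary();
        for (int i = 0; i < 3; i++) dict.recordTyped("Brackenridge");
        System.out.println(dict.isLearned("Brackenridge")); // true: stop "fixing" it
    }
}
```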
http://www.howtogeek.com/222769/how-to-tame-and-improve-the-iphones-autocorrect-feature/
Friday, November 4, 2016
Route Finding
This week, I will be discussing the algorithms that go into route finding. As someone who has always been very interested in maps, I find the application of route finding to a problem such as finding the fastest way to get from point A to point B very appealing. In fact, one of the things that bothers me more than it should is that throughout the Richmond campus, there are many routes that seem to be equally fast, so you can never be sure that you're going the best way.
In a programming world, route finding becomes simpler when the routes are reduced to a more regimented, grid-like structure. Take this example from Khan Academy, for instance:
In this picture, a maze has a goal (the red square) and a yellow circle, which began the maze at the square labeled 8 at the bottom (it is a simulation, and I couldn't take the picture in time). When a user inputs the goal square, the program maps backwards from that square: the square directly to the right of the red square is one unit away, the squares above and below that one are two units away, and so on. It has to take this incremental approach because a direct calculation of the distance between the circle and the goal would ignore the gray walls of the maze. The end result is that the program reaches the yellow circle from the red square more quickly by going "down" than it would have by going "up," so that is the path the circle takes.
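This backwards, layer-by-layer numbering is a breadth-first search (sometimes called a flood fill) outward from the goal. Here is a minimal sketch on a made-up grid; the maze layout is an illustrative assumption, not the Khan Academy example itself.

```java
import java.util.*;

// Breadth-first search outward from the goal: each open cell gets the number
// of steps needed to reach the goal, respecting walls ('#').
public class MazeDistances {
    public static void main(String[] args) {
        char[][] maze = {
            {'.', '.', '.', '#'},
            {'#', '#', '.', '#'},
            {'.', '.', '.', '.'},
        };
        int goalRow = 0, goalCol = 0;

        int[][] dist = new int[maze.length][maze[0].length];
        for (int[] row : dist) Arrays.fill(row, -1); // -1 means "not reached yet"
        Deque<int[]> queue = new ArrayDeque<>();
        dist[goalRow][goalCol] = 0;
        queue.add(new int[]{goalRow, goalCol});

        int[][] moves = {{1, 0}, {-1, 0}, {0, 1}, {0, -1}};
        while (!queue.isEmpty()) {
            int[] cell = queue.poll();
            for (int[] m : moves) {
                int r = cell[0] + m[0], c = cell[1] + m[1];
                if (r >= 0 && r < maze.length && c >= 0 && c < maze[0].length
                        && maze[r][c] != '#' && dist[r][c] == -1) {
                    dist[r][c] = dist[cell[0]][cell[1]] + 1;
                    queue.add(new int[]{r, c});
                }
            }
        }
        // From any starting cell, stepping to a neighbor with a smaller number
        // leads to the goal along a shortest wall-respecting path.
        System.out.println(Arrays.deepToString(dist));
    }
}
```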
In the context of maps, route finding becomes more complex and difficult to calculate, but it follows the same principles. Using this screenshot of the Richmond campus, we can trace out a route from Sarah Brunet Hall to the target in the same way.
From the target, the gray circle at the bottom, a program could trace 0.01-mile increments. It would likely reach the fork in the road within one or two increments, then continue adding the same increment around the loop. It would also explore options such as Ryland Circle, but those turn out to be irrelevant to the end goal. Letting Google Maps do the calculations, we can find the fastest route. The same incremental expansion, applied to a road network with a distance on each segment, is sketched below.
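On a road network, the increments become the lengths of road segments, and always extending the shortest route found so far is essentially Dijkstra's algorithm. Here is a minimal sketch with a few hypothetical campus segments and made-up distances; the place names and mileages are for illustration only.

```java
import java.util.*;

// Dijkstra's algorithm on a tiny, made-up road graph (distances in miles).
public class CampusRoutes {
    static class Step {
        final String node; final double dist;
        Step(String node, double dist) { this.node = node; this.dist = dist; }
    }

    static void addRoad(Map<String, Map<String, Double>> roads, String a, String b, double miles) {
        roads.computeIfAbsent(a, k -> new HashMap<>()).put(b, miles);
        roads.computeIfAbsent(b, k -> new HashMap<>()).put(a, miles);
    }

    public static void main(String[] args) {
        Map<String, Map<String, Double>> roads = new HashMap<>();
        addRoad(roads, "Sarah Brunet Hall", "Fork", 0.02);
        addRoad(roads, "Fork", "Loop East", 0.15);
        addRoad(roads, "Fork", "Loop West", 0.20);
        addRoad(roads, "Loop East", "Target", 0.05);
        addRoad(roads, "Loop West", "Target", 0.05);

        Map<String, Double> best = new HashMap<>();
        PriorityQueue<Step> frontier = new PriorityQueue<>(Comparator.comparingDouble(s -> s.dist));
        best.put("Sarah Brunet Hall", 0.0);
        frontier.add(new Step("Sarah Brunet Hall", 0.0));

        while (!frontier.isEmpty()) {
            Step step = frontier.poll();
            if (step.dist > best.getOrDefault(step.node, Double.MAX_VALUE)) continue; // stale entry
            for (Map.Entry<String, Double> e : roads.getOrDefault(step.node, Map.of()).entrySet()) {
                double d = step.dist + e.getValue();
                if (d < best.getOrDefault(e.getKey(), Double.MAX_VALUE)) {
                    best.put(e.getKey(), d);          // found a shorter way to this point
                    frontier.add(new Step(e.getKey(), d));
                }
            }
        }
        System.out.println("Shortest distance to Target: " + best.get("Target") + " miles");
    }
}
```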
https://www.khanacademy.org/computing/computer-science/algorithms/intro-to-algorithms/a/route-finding
https://www.google.com/maps/@37.5774941,-77.5373498,16.76z
Friday, October 28, 2016
The Pilot Earpiece
This week, I'll be writing about a fairly futuristic piece of new technology. When I first heard of the Pilot earpiece, I was reminded of a science fiction book I read in elementary school in which aliens had chips that could be put on a tongue to translate languages instantly. The Pilot is not far off, as the earpieces operate similarly, albeit not quite as smoothly. Still, as a real world application of such an unbelievable idea, the earbuds are impressive. A sample video is embedded at the bottom of the blog.
The Pilot earpiece, which fits like an earbud.
The earpieces work by essentially combining speech recognition software with a speech output function. Similar to Apple's Siri, the earpiece identifies a voice and what it is saying as input. Then it gets to work translating the input into the language of choice; this part is done over a Bluetooth connection to a smartphone app. Once the sentence has been translated, the earpiece plays it in the new language directly into the user's ear. On its surface, nothing about the device seems overly complicated, as it is a fairly simple input/output relationship. The real challenge comes in achieving accuracy and capability when you consider not only how many languages there are, but also the accents, idioms, and lack of clarity that occur in everyday conversation.
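Conceptually, the pipeline is three stages chained together: recognize the speech, translate the text, and speak the result. Below is a minimal sketch of that structure; the interface names and stub implementations are my own illustration, not Waverly Labs' actual design.

```java
// Conceptual pipeline: speech in one language -> text -> translated text -> speech.
public class TranslatorPipeline {
    interface SpeechRecognizer  { String transcribe(byte[] audio); }
    interface Translator        { String translate(String text, String targetLanguage); }
    interface SpeechSynthesizer { byte[] speak(String text); }

    private final SpeechRecognizer recognizer;
    private final Translator translator;
    private final SpeechSynthesizer synthesizer;

    TranslatorPipeline(SpeechRecognizer r, Translator t, SpeechSynthesizer s) {
        recognizer = r; translator = t; synthesizer = s;
    }

    // Input: raw audio from the other speaker. Output: audio to play in the user's ear.
    byte[] process(byte[] incomingAudio, String targetLanguage) {
        String heard = recognizer.transcribe(incomingAudio);
        String translated = translator.translate(heard, targetLanguage);
        return synthesizer.speak(translated);
    }

    public static void main(String[] args) {
        // Stub stages, just to show the flow of data from ear to ear.
        TranslatorPipeline pipeline = new TranslatorPipeline(
                audio -> "hola",            // pretend recognizer
                (text, lang) -> "hello",    // pretend translator
                text -> text.getBytes()     // pretend synthesizer
        );
        System.out.println(new String(pipeline.process(new byte[0], "en")));
    }
}
```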
The Pilot is currently being crowdfunded, with potential buyers reserving pairs until the company has enough to begin mass production (likely in the spring). While the first version of the product will likely have its lags and hiccups, the potential of the earbuds in the future is exciting.
http://www.waverlylabs.com/#_overview
http://thenextweb.com/gadgets/2016/05/17/pilot-translates-just-like-the-babel-fish/
Friday, October 21, 2016
The Failure of my Subaru's Programming
I was struggling to come up with a topic to write a journal entry about when I realized that for most of this week, I have been dealing with a programming-related issue. My car on campus, a 2010 Subaru, has had more than its share of issues throughout its life.
A similar "powder blue" model.
Most recently, it has started to simultaneously flash the battery light, the skid warning light, and the electronic brake light, spin the speedometer needle back to zero, and flicker the headlights.
Some of the lights that will occasionally flash on my dashboard.
Even with my limited knowledge of cars, I made the assumption that there was an issue with either the battery or the alternator, both of which have given me trouble in the past. When I took it in to the shop today, though, the mechanics found nothing wrong with either of those and failed to diagnose the problem. After some investigation online, I came across what could be the issue: rusted or loosened connections in the car's internal wiring. Because of this disconnect, the car's central processing unit is receiving mixed messages. There is likely an internal voltmeter connected to the battery that reads the battery's level continuously. With the poor connection, however, this reading is only occasionally transmitted. Thus, when the connections are jolted loose, perhaps by a hard acceleration or turn, the central processing unit does not receive the input from the voltmeter and reads the battery's voltage as 0.
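My guess can be expressed as a very small piece of logic: if no reading arrives from the sensor, the controller falls back to a default of zero, which looks like a dead battery and trips the warning light. This is pure speculation on my part about how the car's software might behave, not anything from Subaru; the threshold below is also made up.

```java
// Speculative sketch: how a missing sensor reading could masquerade as a dead battery.
public class BatteryMonitor {
    // null models the voltmeter reading never arriving over a loose connection.
    static double readVoltage(Double sensorValue) {
        return sensorValue == null ? 0.0 : sensorValue; // missing input treated as 0 volts
    }

    public static void main(String[] args) {
        Double reading = null;                  // connection jolted loose on a hard turn
        double volts = readVoltage(reading);
        boolean batteryWarning = volts < 11.5;  // hypothetical warning threshold
        System.out.println("Battery light on? " + batteryWarning); // true, despite a healthy battery
    }
}
```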
There are likely similar signal transmission errors relating to the speedometer (causing it to assume the car is not moving), the connection to the electronic brake (it may be programmed with a rule such as "if the electronic brake is not engaged, do not display the light," so receiving no input at all would turn the light on), and the skid sensors, although I am not sure how this lapse in the internal logic causes the headlights to flicker. Unfortunately, I do not have anywhere near the proper understanding of what goes on under the hood of a car to fix these issues, so for now I will continue to monitor the situation and hope for the best.
http://www.cars101.com/subaru/legacy/legacy2014photos2.html
http://www.cars101.com/subaru/outback/outback2010.html
A similar "powder blue" model.
Most recently, it has started to simultaneously flash a battery light, skid warning light, electronic brake light, spin the speedometer needle back to zero, and flicker the headlights.
Some of the lights that will occasionally flash on my dashboard.
Even with my limited knowledge of cars, I made the assumption that there was an issue with either the battery or the alternator, both of which I have had issues with in the past. When I took it in to the shop today, though, the mechanics found nothing wrong with either of those and failed to diagnose the problem. After some investigation online, I came across what could be the issue: rusted or loosened connections between the internal wirings of the car. Because of this disconnect, the car's central processing unit is receiving mixed messages. There is likely an internal voltmeter connected to the battery that reads the level of the battery continuously. With the poor connection, however, this reading is only occasionally being transmitted. Thus, when the connections are jolted loose, by perhaps a hard acceleration or turn, the central processing unit does not receive the input from the voltmeter and reads the voltage of the battery as 0.
There are likely similar signal transmission errors relating to the speedometer (causing it to assume the car is not moving), connection to the electronic break (it may be programmed with a statement such as if the electronic break is not engaged, do not display the light, so no input would turn the light on), and skid sensors, although I am not sure how this lapse in the internal algorithm causes the headlights to flicker. Unfortunately, I do not have even close to the proper understanding of what goes on under the hood of a car to fix these issues, so for now, I will continue to monitor the situation and hope for the best.
http://www.cars101.com/subaru/legacy/legacy2014photos2.html
http://www.cars101.com/subaru/outback/outback2010.html
Friday, September 30, 2016
Combat Drones
Drones have received a lot of attention in the American media recently for a variety of reasons. For the avid Instagram user, they have become an exotic addition to the picture-taking arsenal. For many others, they are much more significant as a new dimension of modern warfare. In warfare, two types of drones are typical: one is simpler, while the other is more complex and, from a computer science standpoint, more intriguing.
A diagram depicting the relationship between the operator and the drone.
At their core, most UAVs (unmanned aerial vehicles) are quite similar to a remote control car a kid might play with. The vehicle is controlled entirely by inputs from a separate remote control. From this remote control, a user can have the UAV change its speed or direction, use it for reconnaissance purposes such as photography, or engage the drone in physical warfare. Historically, UAVs were first implemented in practice by the Israeli army in the 1970s, then the Iranian army in the 1980s, and subsequently the U.S. army in the Gulf War in the 1990s. At that point, UAVs were used for reconnaissance purposes or as decoys. The first kill by a drone occurred in October of 2001, and since then, drone strikes have occurred with increasing frequency. This has been a controversial tactic employed by the American armed forces.
An unmanned aerial vehicle firing a rocket.
Drones can have varying levels of autonomy, however. Many perform almost all functions under the guidance of an operator but can carry out a function such as "return to base" by themselves. Others have greater capabilities, relying on sensors of the world around them to inform how they act. These drones operate by relying on a large number of loops, from algorithms calculating the most fuel- and time-efficient way to travel to constructing the actual trajectory from one location to another. Fully autonomous UAVs are said to be entirely cognizant of their surroundings and capable of total independence in decision making. While fascinating from a programming standpoint, the political and ethical side of autonomous drone warfare has limited its implementation. Increased fear of malfunction or hacking has also left decision makers wary of releasing autonomous UAVs in full force. The capabilities of these machines are nonetheless astounding and will certainly factor into the engineering landscape of the future.
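The "large number of loops" can be pictured as a repeating sense-plan-act cycle. Here is a minimal sketch of such a control loop; the one-dimensional flight path, fuel numbers, and "return to base" threshold are made-up illustrations rather than any real autopilot.

```java
// Toy sense-plan-act loop: fly to a waypoint, loiter, and return when fuel runs low.
public class DroneLoop {
    public static void main(String[] args) {
        double position = 0.0;   // km from base (sensed)
        double fuel = 100.0;     // percent remaining (sensed)
        double waypoint = 5.0;   // mission target

        while (position > 0.0 || fuel == 100.0) {
            // Plan: low fuel overrides the mission with "return to base".
            if (fuel < 60.0) waypoint = 0.0;   // hypothetical reserve threshold
            double step = Math.signum(waypoint - position)
                        * Math.min(1.0, Math.abs(waypoint - position));

            // Act: move one increment; flying (or hovering) burns fuel each cycle.
            position += step;
            fuel -= 5.0;
            System.out.printf("position=%.1f km, fuel=%.0f%%%n", position, fuel);
        }
    }
}
```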
https://en.wikipedia.org/wiki/Unmanned_aerial_vehicle#/media/File:Autonomous_control_basics.jpg
https://en.wikipedia.org/wiki/Unmanned_combat_aerial_vehicle
https://en.wikipedia.org/wiki/Unmanned_aerial_vehicle
http://www.globalexchange.org/blogs/peopletopeople/tag/drone-warfare/
Friday, September 23, 2016
Alan Turing
While I initially learned of Alan Turing through the movie "The Imitation Game," I came to appreciate his influence on computer science once I started taking this class. By learning just the simple functions that go into programming, I came to understand how difficult it must be to create a programming language, let alone the very first one, with no precedent to work from. The codebreaking machines Turing and his colleagues built to break codes used by the Nazi army during World War II ended up being the basis for far more developments in the world of computer science.
The wheels of the Lorenz SZ42 cipher machine, whose settings the Colossus computer was built to work out.
Turing worked for the British codebreaking effort at Bletchley Park during World War II, where his electromechanical Bombe was used to break the German Enigma cipher. The Colossus computer, designed by the engineer Tommy Flowers to attack the German Lorenz teleprinter cipher, is understood to be the first programmable electronic computer. By studying a lapse in German operating procedure, the Bletchley codebreakers worked out that they could count, across an intercepted message, how often the equation ΔZ1 ⊕ ΔZ2 ⊕ Δχ1 ⊕ Δχ2 = • held (where • denotes "false," or zero), and use that count to recover the cipher machine's wheel settings.
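A minimal sketch of that counting idea: XOR the corresponding delta streams and count the zeros; a count noticeably above half the message length signals that the guessed wheel settings are probably right. The bit streams below are random stand-ins, not real intercepted traffic, so the count stays near 50%.

```java
import java.util.Random;

// Toy version of the "count the dots" statistic Colossus computed.
public class DotCounter {
    // Delta of a bit stream: each position XORed with the next one.
    static int[] delta(int[] bits) {
        int[] d = new int[bits.length - 1];
        for (int i = 0; i < d.length; i++) d[i] = bits[i] ^ bits[i + 1];
        return d;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        int n = 1000;
        int[] z1 = new int[n], z2 = new int[n], chi1 = new int[n], chi2 = new int[n];
        for (int i = 0; i < n; i++) {
            z1[i] = rng.nextInt(2);   z2[i] = rng.nextInt(2);
            chi1[i] = rng.nextInt(2); chi2[i] = rng.nextInt(2);
        }

        int[] dz1 = delta(z1), dz2 = delta(z2), dc1 = delta(chi1), dc2 = delta(chi2);
        int dots = 0;
        for (int i = 0; i < dz1.length; i++) {
            if ((dz1[i] ^ dz2[i] ^ dc1[i] ^ dc2[i]) == 0) dots++; // a "dot" (false) result
        }
        // Random streams hover near 50%; the right wheel settings on real traffic
        // produced a count noticeably above chance.
        System.out.println("dots: " + dots + " out of " + dz1.length);
    }
}
```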
Turing's contributions went largely unrecognized during his lifetime, and he was prosecuted under Britain's discriminatory laws against homosexuality. He was, however, later formally apologized to and pardoned by the British government, and acknowledged not only for his contributions to World War II, but to the field of computer science in general.
https://en.wikipedia.org/wiki/Alan_Turing#Early_computers_and_the_Turing_test
https://en.wikipedia.org/wiki/Colossus_computer
https://en.wikipedia.org/wiki/Colossus_computer#/media/File:SZ42-6-wheels-lightened.jpg
Friday, September 16, 2016
Magnetic Key Cards
Magnetic key cards are ubiquitous in the modern world, with their technology applied to everything from credit cards to hotel keys. Considering our University of Richmond IDs use this same technology for so many things we do on a daily basis, I decided to look into the technology behind key cards.
https://en.wikipedia.org/wiki/Magnetic_stripe_card
The black strip visible on most magnetic key cards is the surface of a thin metallic band on which magnetically charged iron particles are arranged so that they store data. Because these charged arrangements are microscopic, there are billions of possible combinations. What this means practically is that each time we swipe our UR IDs, credit cards, or anything else with a magnetic strip, the reader analyzes the pattern on the strip. It then uses that information to determine what access should be granted or, in the case of credit cards, which account should be charged. It is impressive to think that the simple act of swiping a card tells a machine who you are and what should happen next, but that is exactly what this technology does.
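A minimal sketch of the reader's job, assuming the strip has already been read off as a string of bits: decode the bits into a card number and look that number up in a table of permissions. The bit pattern, card numbers, and permissions are made up for illustration.

```java
import java.util.*;

// Toy card reader: bits from the strip -> card number -> access decision.
public class CardReader {
    public static void main(String[] args) {
        // Hypothetical table mapping card numbers to what they unlock.
        Map<Long, Set<String>> permissions = new HashMap<>();
        permissions.put(1048575L, Set.of("dorm", "dining hall", "library"));
        permissions.put(349525L, Set.of("library"));

        String bitsFromSwipe = "11111111111111111111";      // 20 bits read off the strip
        long cardNumber = Long.parseLong(bitsFromSwipe, 2); // decodes to 1048575

        Set<String> allowed = permissions.getOrDefault(cardNumber, Set.of());
        System.out.println("Card " + cardNumber + " opens: " + allowed);
        System.out.println("Dorm access granted? " + allowed.contains("dorm"));
    }
}
```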
A magnetic sequence with its own meaning.
Magnetic strips can be produced in two levels of coercivity, high and low. While low coercivity magstrips are less expensive to produce, they also have a much shorter lifespan and can be more easily erased, even by interaction with other magnetic items. These are typically only used for gift cards, season passes, and other such short term needs. Longer term cards, such as credit cards or debit cards, require high coercivity magstrips. These use more magnetic energy to encode, thus making them more difficult to erase. Interestingly, high coercivity strips show up as black on cards, while low coercivity strips tend to be light brown.
A key card being read by a reader.
Recently, common magnetic strip key cards appear to be getting phased out. Many tech-savvy companies and consumers are turning to electronic alternatives such as Apple Pay. This technology does essentially the same thing, reading a user's specifically encoded information in order to decide what to do, but with fewer pieces of hardware. Also, the increased abilities of identity thieves and hackers have made common magstrips riskier: many magstrip readers can be outfitted to send the swiped information to recipients beyond just the company receiving the payment. Thus, the trend seems to favor chip readers, which are more difficult to breach. Regardless, the magnetic key card has been a huge part of our lives as the world has become more technological.
https://en.wikipedia.org/wiki/Magnetic_stripe_card#/media/File:Aufnahme_der_magnetischen_Struktur_eines_Magnetstreifens_auf_eine_EC-Karte_(Aufnahme_mit_CMOS-MagView)2.jpg
http://www.moosekeycard.com/price.html
Thursday, September 8, 2016
3D Touch on the iPhone 7
Tech giant Apple recently released its latest iteration of the iPhone, its wildly popular smartphone brand. As expected, the company tweaked many parts of the phone's design, from its headphone jack to the interior processing units to the camera, and, as I will focus on in this journal entry, its home button.
Apple's 3D Touch being used on the iPhone 6s.
Apple has used a technology it calls "3D Touch" on the touchscreen part of the phone in the past. It allows the user to press harder on the screen to unlock additional outputs beyond simply selecting whatever is on the screen at the time: different amounts of pressure can trigger previews, menu shortcuts, and quicker actions than usual. On the iPhone 7, however, Apple is applying this technology to its home "button." While this has previously been a physical button that could be pressed for a number of purposes, the newest edition is a button only in name. Instead of operating mechanically, the new button registers a user's pressure not with a tangible movement but with an internal response. A major benefit of this advancement appears to be increased durability, as well as compatibility with Apple's desire to make the iPhone water resistant.
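One way to picture the pressure-sensitive behavior is as a set of thresholds mapped to different actions. The force values and action names below are invented for illustration; Apple's actual thresholds and APIs are not public in this form.

```java
// Toy mapping from a measured press force to an action.
public class ForceTouch {
    // Force is a normalized value from 0.0 (no touch) to 1.0 (hardest press).
    static String actionFor(double force) {
        if (force <= 0.0) return "none";
        if (force < 0.3)  return "tap: select item";      // hypothetical threshold
        if (force < 0.7)  return "peek: preview content"; // hypothetical threshold
        return "pop: open content / shortcut menu";
    }

    public static void main(String[] args) {
        for (double f : new double[]{0.1, 0.5, 0.9}) {
            System.out.println(f + " -> " + actionFor(f));
        }
    }
}
```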
The iPhone 7 shown from the bottom.
Perhaps most interesting is that "Third-party companies will also be able to program their own feedback through a taptic engine API" (The Verge). This means that the myriad apps downloaded on iPhones will gain another input option. From a programming standpoint, this gives app developers and Apple's own engineers many exciting new directions to explore simply by increasing the variety of inputs. Overall, replacing a mechanical button with a 3D Touch button demonstrates how Apple's development teams are overcoming the obstacles introduced by oft-failing physically triggered home buttons. As they have with all of their product lines in the past, the tech giant is attempting to quell an issue while pushing its technology toward the future.
Sources:
http://appleapple.top/iphone-7-touchscreen-button-force-touch-id-will-accurately-simulate-the-usual-clicks/
http://www.slashgear.com/iphone-7-home-button-how-does-it-work-07455109/
http://appleinsider.com/articles/15/09/10/force-touch-gets-redefined-in-the-iphone-6s-with-3d-touch
http://www.theverge.com/circuitbreaker/2016/9/7/12828652/apple-iphone-7-home-button-removed-force-touch
http://www.businessinsider.com/apple-3d-touch-for-iphone-2015-9
Thursday, September 1, 2016
CGI in Filming
I was inspired to write about computer-generated imagery, or CGI, after watching the following video (definitely worth the watch if you are a fan) on the Game of Thrones episode "Battle of the Bastards." It focuses mainly on how producers could augment the scenes to drastically increase the scope of the shot from a few dozen horses and actors to full scale armies.
https://vimeo.com/172374044
CGI is usually used to create 3D images, although it can also be applied in 2D formats. I was surprised to learn that even the backgrounds of many CGI scenes, which I assumed were authentic places, were actually created with algorithms. Using these techniques, programmers can turn a blank canvas into realistic topography. They achieve this authenticity by coding in midpoint formulas and meshing surfaces together.
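The "midpoint formulas" refer to midpoint-displacement methods for generating fractal terrain: repeatedly set each midpoint to the average of its endpoints plus a shrinking random offset. Here is a minimal one-dimensional sketch (a terrain profile rather than a full surface); the roughness constant and random seed are arbitrary choices for illustration.

```java
import java.util.Random;

// 1D midpoint displacement: generate a jagged but natural-looking height profile.
public class MidpointTerrain {
    public static void main(String[] args) {
        int size = 17;                 // 2^4 + 1 sample points
        double[] height = new double[size];
        height[0] = 0.0;               // fixed endpoints
        height[size - 1] = 0.0;

        Random rng = new Random(7);
        double roughness = 8.0;        // arbitrary initial displacement scale

        for (int step = size - 1; step > 1; step /= 2) {
            for (int left = 0; left + step < size; left += step) {
                int mid = left + step / 2;
                double average = (height[left] + height[left + step]) / 2.0;
                height[mid] = average + (rng.nextDouble() * 2 - 1) * roughness;
            }
            roughness /= 2.0;          // smaller bumps at finer scales
        }

        for (double h : height) System.out.printf("%.2f ", h);
        System.out.println();
    }
}
```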
An example of CGI damp fur.
There has also been a significant amount of effort put into creating realistic images of skin, cloth, and fur. Programmers have struggled to imitate the natural reactions of these materials to movement, but accuracy in their portrayal can be achieved down to 0.1 millimeters. Another application of CGI is in user-driven formats such as flight simulators, where the program has to be designed so that the actions performed by the user are accurately portrayed in the visualization.
One of the main drawbacks of CGI is its cost, both in time and money. Not only is it incredibly laborious, but according to Money Inc., ten minutes of CGI in a Game of Thrones episode equates to roughly $800,000. As CGI's applications and effectiveness continue to impress, producers may decide they are willing to pay more, which would be welcome news to many fans.
http://bgr.com/2016/06/29/game-of-thrones-battle-bastards-effects/
https://en.wikipedia.org/wiki/Computer-generated_imagery
http://moneyinc.com/much-costs-make-single-episode-game-thrones/
https://en.wikipedia.org/wiki/Fur
iRobot
iRobot's headquarters in Bedford, MA.
Founded by three MIT graduates in 1990, iRobot has become one of the leaders in robotic technology today. The company has sold over 14 million home robots, and over 5,000 of its robots are used in defense fields such as the military and police forces. Specifically, iRobot's PackBot has been used to assist in recovery efforts ranging from 9/11 to the Fukushima nuclear disaster. Some robots, such as the Seaglider, are able to operate underwater, drastically increasing the range of possibilities. Many of iRobot's earlier creations operated similarly to a common remote control car, but more recent home robots such as the Roomba operate autonomously.
The Roomba robot avoiding a staircase.
The Roomba, a robotic vacuum cleaner, is iRobot's most popular invention, with over 10 million units sold worldwide. Using only two wheels, the Roomba is able to navigate around obstacles and even drop-offs, and to detect dirty spots on the floor. When the bumper on the front of the Roomba detects that it has run into something (an input), it internally issues a command to change direction (an output), allowing the robot to run independent of human intervention. Unlike some other autonomous robotic vacuum cleaners, the Roomba does not map out the rooms it cleans; it instead operates by tracing walls and heading off at random angles until it encounters an obstacle. Newer and more expensive versions of the Roomba also incorporate infrared technology, giving the robots another tool with which to detect obstacles, as well as a way to find their charging base.
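That input/output relationship can be written as a very small control loop: read the bumper, and if it reports a collision, back up and turn to a random new heading. The sketch below simulates the bumper with random collisions; it illustrates the behavior described above rather than iRobot's actual code.

```java
import java.util.Random;

// Toy bump-and-turn loop: drive forward until the bumper fires, then pick a new heading.
public class BumpAndTurn {
    public static void main(String[] args) {
        Random rng = new Random();
        int headingDegrees = 0;

        for (int cycle = 0; cycle < 20; cycle++) {
            boolean bumperPressed = rng.nextInt(5) == 0; // simulated collision (the input)

            if (bumperPressed) {
                headingDegrees = rng.nextInt(360);       // back up and turn (the output)
                System.out.println("Bump! New heading: " + headingDegrees + " degrees");
            } else {
                System.out.println("Driving forward at heading " + headingDegrees + " degrees");
            }
        }
    }
}
```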
Although iRobot's original inventions were defense-centric, the company recently sold its military operations arm to Arlington Capital Partners in order to focus more on consumer goods, so there should be many exciting developments to come.
https://en.wikipedia.org/wiki/Roomba
https://en.wikipedia.org/wiki/IRobot
http://www.irobot.com/About-iRobot/Company-Information/History.aspx
http://roboticsandautomationnews.com/2015/07/23/irobot-second-quarter-financial-results-exceed-expectations/921/
http://www.irobot.com/For-the-Home/Vacuuming/Roomba.aspx