China's leader, Mr. Xi Jinping, asserts that every country's government is legitimate, even one like his that censors everything a person sees and says and uses facial recognition technology to monitor the activities of every citizen. There are numerous ramifications of acknowledging that despotic governments that ignore human rights and theocratic governments that force all people to follow the same religious beliefs and practices deserve the same respect and fealty as governments founded on democratic principles.
Take the example of neurotechnologies capable of inserting electrodes into a brain to temporarily reduce the time it takes to memorize multiplication tables, a football playbook, or an enemy's codes and plans. Invading a brain also has other effects. Blood leaking into the brain's compartments from such an insertion eventually degrades normal cell activities, such as memory and thinking. The impact on one brain function also can "cross talk" and impact other brain functions, such as the moral ability to discern right from wrong.
Some scientists devote themselves to technologies that enhance individual, commercial, and military applications of humans, robots, and drones. Other people use technology to binge-watch shows, socialize on smartphones, or order lipstick and mascara. Around the world, everyone has a stake in supporting governments devoted to: 1) promoting technologies that are good for society and 2) impeding the development and controlling the use of technologies that injure humans.
Monday, November 11, 2019
Friday, September 14, 2018
Real Imaginary Friends
Have you heard about digital personalities? Your teens and students may already know a new kind of avatar named Miquela Sousa. Trendwatching.com reports that by 2020, AI, facial recognition, emotional sensing, and other new technologies will create 5 billion virtual assistants, virtual companions, and computer-generated influencers (CGIs).
Marketers can tailor a perfect CGI to every marketing segment's sex, age, size, and passions. That's what Trevor McFedries and Sara De Cou are doing at Brud, an LA-based tech startup. Vogue's September 2018 issue describes Miquela Sousa, the 19-year-old model and musician Brud built around current tastes and culture cues. Stylist Lucinda Chambers outfits Lil Miquela, as her Instagram followers have known her since 2016, in Alexander McQueen for a Vogue photo shoot. Miquela's interests are said to include recording music, the politics of Alexandria Ocasio-Cortez, relapsing into tomboy clothes and activities, makeup tutorials on YouTube, and new Drake albums. She has blunt-cut bangs, straight dark hair past her shoulders, rather thick eyebrows over brown eyes, full pouty lips, a slim but not skinny body, and a pretty, freckled face lightly covered with foundation a tad darker than medium.
What does a marketer want a susceptible young person to do after interacting with Miquela Sousa? Imitate her look, fashions, activities, and causes. The latter, in her case, are liberal.
It is easy to slip out of reality and get caught up imitating what a made-up CGI looks like, wears, does, and says. Too easy.
Tuesday, January 23, 2018
Flying Can Be Fun Again
Some airline passengers in the Caribbean, Singapore, and the United Arab Emirates, according to trendwatching.com, can begin to anticipate the glamorous experience flying was in years gone by. In Turkey, they'll also meet a new friend, Nely.
Vacationers touring Barbados with Virgin Holidays will be able to pull their casual flying clothes on over their bathing suits and check out of their resort hotels early, because Virgin will pick them up, check their luggage, and take them to the beach. At oceanside, Virgin will provide boarding passes, a locker, beach towels, a showering facility, unlimited refreshments, and an air-conditioned lounge area, so that every last vacation moment merits a "Wish You Were Here" selfie sent home.
Visitors to Singapore's Changi Airport have walked among animatronic, remote-controlled butterflies designed to resemble the Diaethria anna species. For kids, the airport's five-story playground offers climbing nets, a pole to slide down, and more, with room for 50 children at a time.
Before heading into the wild blue yonder from Dubai International Airport, passengers will explore a virtual aquarium surrounding them as they walk through a security tunnel to their flights in Terminal 3. To use the tunnel instead of traditional procedures, passengers pre-register at 3D face-scanning kiosks located throughout the airport. Watching the fish is expected to relax and entertain passengers while 80 hidden tunnel cameras scan their faces from different angles. At the end of the tunnel, cleared travelers are sent on their way with a "Have a nice trip" message, or a red sign alerts security. Dubai's airports process 80 million passengers now; the tunnel was developed to handle the 124 million expected by 2020. It should be mentioned that Dubai's virtual aquarium faces the same legal challenges as other facial recognition systems.
At Turkey's Istanbul New Airport, a robot named Nely notes the expressions, ages, and genders of passengers before greeting them and making (or not making) small talk. Nely is, of course, travel-functional: booking flights for passengers, relaying information, and providing weather updates. Using AI, facial recognition, emotional analysis based on input from sociologists, voice capability, and a bar code reader, Nely even remembers passengers from previous interactions.
Sunday, June 25, 2017
Blind Trust in AI Is a Mistake
For better or worse, combining algorithms with images collected by drones, satellites, and video feeds from other monitors enhances aerial intelligence in a variety of fields.
Overhead movie and TV shots already provide a different perspective, just as viewing the Earth or a rocket launch from a spacecraft or satellite does. These new perspectives offer advantages beyond entertainment value and a chance to study the dwindling ice cap at the North Pole.
Seen from above, data about landscapes has various applications. The famous Texas Gulf Sulphur Company case involving insider trading began with aerial geophysical surveys in eastern Canada. When pilots in planes scanning the ground saw the needles in their instruments going wild, they could pinpoint the possible location of electrically conductive sulphide deposits containing zinc and copper along with sulphur.
When Argentina invaded Britain's Falkland Islands in April 1982, the only map the defenders reportedly possessed showed perfect picnic spots. Planes took to the air to locate the landing spot that enabled British troops to declare victory at Port Stanley in June 1982.
Nowadays, the aim is to write algorithms that look for certain activities among millions of images. A robber can program an algorithm to tell a drone's camera to identify where delivery trucks leave packages. An algorithm can call attention to a large group of people and cars arriving at a North Korean missile testing site. Then an analyst must figure out why, because, to date, artificial intelligence (AI) does not explain how or why it reaches a conclusion.
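As a rough illustration, the flagging step itself can be simple once a pre-trained detector has labeled each image; in this minimal Python sketch, detect_objects and the activity labels are hypothetical placeholders, not part of any real system described above.

ACTIVITIES_OF_INTEREST = {"delivery_truck", "package", "vehicle_convoy"}

def flag_frames(frames, detect_objects):
    # frames: iterable of (frame_id, image) pairs
    # detect_objects: any callable returning a list of {"label": ...} detections for one image
    flagged = []
    for frame_id, image in frames:
        labels = {d["label"] for d in detect_objects(image)}
        matches = labels & ACTIVITIES_OF_INTEREST
        if matches:
            flagged.append((frame_id, sorted(matches)))  # only these frames reach a human analyst
    return flagged

The hard work hides inside the detector, and that is precisely the part that cannot explain itself.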
Since artificial intelligence's algorithms operate in their own "black boxes," humans are unable to evaluate the process used to arrive at conclusions. Humans cannot replicate AI processes independently. And if an algorithm makes a mistake, AI provides no clues to the reasoning that went astray.
In other words, robots without supervision can take actions based on conclusions dictated by faulty algorithms. An early attempt to treat patients based on a "machine model" provides a good example. Doctors treating pneumonia patients who also have asthma admit them to the hospital immediately, but the machine readout said to send them home. The "machine" saw that pneumonia patients with asthma recovered quickly in the hospital and decided they had no reason to be admitted in the first place. The "machine" did not have the information that their rapid recovery occurred because they were admitted to the hospital's intensive care unit.
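A minimal, synthetic sketch of how that mistake arises: if every asthma patient in the training data received intensive care and therefore recovered, a model fitted to outcomes alone learns that asthma "lowers" risk, because the data never record the care that produced the good outcome. The numbers below are invented purely for illustration.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
asthma = rng.integers(0, 2, n)             # 1 = pneumonia patient also has asthma
icu = asthma.copy()                        # hidden confounder: asthma patients always went to the ICU
p_bad = 0.10 + 0.25 * asthma - 0.30 * icu  # asthma raises true risk, but ICU care lowers it more
bad_outcome = rng.random(n) < np.clip(p_bad, 0, 1)

# The model sees only asthma status, not the ICU care that drove recovery.
model = LogisticRegression().fit(asthma.reshape(-1, 1), bad_outcome)
print(model.coef_)                         # negative: asthma looks "protective," so send them home

Only when the ICU variable is added, or a clinician reviews the rule, does the spurious conclusion become visible.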
Google's top artificial intelligence expert, John Giannandrea, speaking at a conference on the relationship between humans and AI, emphasized the effect of bias in algorithms. Bias not only affects the news and ads social media allows us to see; he echoed the idea that it also can determine the kind of medical treatment a person receives and, through AI's predictions about the likelihood that a convict will commit future offenses, can affect a judge's decision regarding parole.
Joy Buolamwini's Algorithmic Justice League found that facial-analysis software was prone to misidentifying the gender of women, especially darker-skinned women. AI is developed by, and often tested primarily on, light-skinned men, yet recognition technology is promoted for hiring, policing, and military applications involving diverse populations. Since facial recognition screening fails to provide clear identifications of some populations, it also has the potential to be used to wrongly identify non-white suspects and to discriminate against hiring non-white employees.
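One concrete check this work points toward is disaggregated evaluation: instead of a single overall accuracy figure, error rates are reported separately for each demographic group. A minimal sketch, assuming you already have predictions alongside self-reported group labels (the sample data here are invented):

from collections import defaultdict

def error_rate_by_group(records):
    # records: iterable of (group, predicted_label, true_label) tuples
    errors, totals = defaultdict(int), defaultdict(int)
    for group, predicted, actual in records:
        totals[group] += 1
        if predicted != actual:
            errors[group] += 1
    return {group: errors[group] / totals[group] for group in totals}

sample = [
    ("lighter-skinned men", "male", "male"),
    ("darker-skinned women", "male", "female"),
    ("darker-skinned women", "female", "female"),
]
print(error_rate_by_group(sample))  # exposes the gap a single overall number would hide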
When humans know they are dealing with imperfect information, whether they are playing poker, treating cancer, choosing a stock, catching a criminal, or waging war, how can they have confidence in authorizing and repeating a "black box" solution that requires blind trust? Who would take moral and legal responsibility for a mistake: the human who authorized action based on AI, the one who wrote the algorithm, or the one who chose the database the algorithm used to reach its conclusion? And then there is the question of moral and legal responsibility for a robot that malfunctions while carrying out the "right" conclusion.
Researchers are trying to determine what elements are necessary to help AI reach the best conclusions. Statistics can't always be trusted. Numbers showing that terrorists are Muslims or that repeat criminals are African Americans do nothing to suggest how an individual Muslim or African American should be screened or treated. AI research is further complicated by findings suggesting that the mind, intellect, and will that control moral values and actions are separate from the physical brain that controls other human activities and diseases such as epilepsy and Parkinson's.
Automated solutions require new safeguards: to defend against hacking that alters information, to eliminate bias, to verify accuracy by checking multiple sources, and to determine accountability and responsibility for actions.
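The first two safeguards lend themselves to at least partial automation. Here is a minimal sketch of one such check, pairing cryptographic hashes (to reveal whether a record has been altered) with a simple comparison across independent sources; the source names and record contents are placeholders for illustration only.

import hashlib
from collections import Counter

def fingerprint(record: bytes) -> str:
    # SHA-256 fingerprint: any tampering with the record changes the hash
    return hashlib.sha256(record).hexdigest()

def cross_check(copies):
    # copies: mapping of source name -> the record as that source reports it
    hashes = {source: fingerprint(data) for source, data in copies.items()}
    majority_hash, _ = Counter(hashes.values()).most_common(1)[0]
    dissenters = [source for source, h in hashes.items() if h != majority_hash]
    return majority_hash, dissenters

copies = {
    "sensor_a": b"convoy observed at site 7",
    "sensor_b": b"convoy observed at site 7",
    "archive": b"convoy observed at site 9",
}
print(cross_check(copies))  # flags "archive" as disagreeing with the other sources

Majority agreement is not proof of accuracy, but a disagreement like this is exactly the kind of discrepancy a human should investigate before acting.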
Wednesday, November 16, 2016
The Challenge of New Technologies: Prepare to Think
IBM recognized what the future would require by showing the final "K" slipping down the side of its "THINK" signs, as if not enough space had been planned for it. The need to think was on display at last night's poster and presentation session given by high school students who spent their summer in science labs and departments at the University of Wisconsin.
Students needed to be willing to expend major effort just preparing for their experiments. One young woman dragged branches, plants, and flowers into the lab to find that birds need to be motivated by an attractive, secure area in order to breed. A young man rowed a boat into the middle of a lake at night, multiple times, to scoop up water showing whether the organism that destroys undesirable algae multiplied faster than the invasive species that destroys that helpful algae remover. Another student had to find a sausage factory where he could procure the pig livers he needed to test how their properties changed during heating in a microwave. Various purifying procedures were needed before testing, and careful math calculations were needed before a machine could emit radiation to attack tumors. Findings, such as the dangers of the toxic nanoparticles lithium batteries give off as they decompose, were preliminary but important.
Heading into the future, artificial intelligence (AI); robotics; CRISPR and other medical technologies; the relationship of technology, human values, and public policy; and other technical subjects will play a major role in lives throughout the world. Yet in recent elections, electorates have cast their votes based on emotion: anger that the rich are getting richer while they're not, anger that their countries are filling up with people who don't look like them, and anger about a perceived attack on their values.
Away from the disillusioned voters back home, members of the World Economic Forum (weforum.org) met in Dubai, United Arab Emirates, this week to discuss the impact of new technologies. Their discussions need to make it back home to those who will have to understand the good and bad effects these technologies will have on their lives.
However, you can't help but sympathize with anyone who tries to deal with the complexity and scientific jargon in an article about a technology such as CRISPR-Cas9. First there is a description: CRISPR-Cas9 can genetically edit cells to improve crops and fight disease. In humans, if it is used to alter the genetic make-up of cells in an egg, sperm, or embryo, the same mutation will be transmitted from generation to generation. For that to happen, genes introduced from outside need to be accepted by the cells that carry the germline, the hereditary material passed from parent to offspring.
Then articles tout the benefits of the new technology. Pig organs could be produced without the genes that prevent transplants into humans. Malaria-carrying mosquitoes could be eliminated, the way genetically altered Atlantic salmon already grow to double the size of ordinary salmon in half the time. Diseases could be cured, even though the complex interrelationship of genes often makes that unlikely.
Articles frequently ignore problems associated with new technologies. It is up to the reader to ask, "Couldn't a rogue scientist use CRISPR-Cas9 to inject unhealthy mutations into human cells that would be transmitted from generation to generation?" Or might only wealthy people be able to afford the cures that CRISPR-Cas9 technology could provide? While CRISPR-altered seeds produce uniform crops that can be harvested by machines, farmers in poor countries may not be able to pay for the annual purchase of patented hybrid seeds that grow food in drought conditions.
Some call the biomedical duel between China and the United States to achieve dramatic CRISPR-Cas9 results "Sputnik 2.0." On October 18, 2016, scientists at Sichuan University in Chengdu, China, used CRISPR-Cas9 technology to disable a gene in a lung cancer patient's immune cells, aiming to reprogram those cells not only to resist the cancer but to fight back against it. To date, neither the results of the test nor its side effects are known. At the University of Pennsylvania in Philadelphia, Dr. Carl June is also about to use CRISPR to edit three genes in the immune cells of 18 cancer patients who have not been helped by other treatments, so that those cells can seek and destroy their cancerous tumors.
Guarding against technology bias also needs to keep pace with fast-moving artificial intelligence (AI) developments. Joy Buolamwini's Algorithmic Justice League found that facial-analysis software was prone to misidentifying the gender of women, especially darker-skinned women. AI is developed by, and often tested primarily on, light-skinned men, yet recognition technology is promoted for hiring, policing, and military applications involving diverse populations.
Finally, we all need to think about and act on the guidelines, regulations, and other checks needed to keep up with the effects of rapidly progressing new technologies.
Labels: AI, artificial intelligence, cells, China, CRISPR-Cas9, experiments, facial recognition, lithium batteries, nano particles, radiation, robots, science, seeds, technology, thinking, values