Antibiotic Awareness, Bee Blunders, and Barbie Becomes a ‘Chatty Cathy’

At a magnification of 6,836x, this colorized scanning electron micrograph depicts a number of Gram-negative Escherichia coli bacteria of the strain O157:H7. Image by Janice Haney Carr/Centers for Disease Control and Prevention

Listen to Science Friday, November 20th, 2-4pm on WSQX. Algorithms aren't impartial; they often have bias baked in. In this episode, a look at how we can ensure that machines scan our resumes and loan applications with a fair eye. Plus, the hard science in a bottle of hard cider, and the design challenge in improving hard-to-read transit maps.

The World Health Organization launched its first World Antibiotic Awareness Week this past Monday. (The campaign runs until November 22, 2015.) The idea behind the initiative is to help the general public, as well as health workers and government agencies, understand how to prevent “the further emergence and spread of antibiotic resistance.” But do people even understand what “antibiotic resistance” is in the first place? Science writer Ed Yong explores this question in a recent article for The Atlantic. Examining a report from the Wellcome Trust, a foundation in London, he describes how most people interviewed did not know what the term “antimicrobial resistance” means (one respondent said, “I need a dictionary for that”). Worse, they fundamentally misunderstood that the bacteria, not their own bodies, are what develop resistance to a given drug. Yong discusses these findings and how to improve the way we talk about antibiotics. He also shares other selected short subjects in science, including how common insecticides may be turning bees into bad pollinators.

Plus, if you played with Barbies growing up, did you ever chat with or confide in your plastic, smiling pal, wishing she could talk back? If so, Mattel’s recently released Hello Barbie (developed in conjunction with the company ToyTalk) is an answer to that childhood wish. The Wi-Fi-enabled doll is programmed with 8,000 lines of dialogue, including positive statements about science (“The study of physics is incredible,” she exclaims). But its ability to record conversations is raising concerns about privacy, among other things. James Vlahos, a contributor to The New York Times Magazine who took an in-depth look at this artificially intelligent toy, joins Ira Flatow to discuss the good and bad of conversing with Barbara Millicent Roberts.

Server room, from Shutterstock

Why Machines Discriminate—and How to Fix Them

Some believers in big data have claimed that, in big data sets, “the numbers speak for themselves”; in other words, the more data available to them, the closer machines can get to achieving objectivity in their decision-making. But data researcher Kate Crawford says that’s not always the case, because big data sets can perpetuate the same biases present in our culture, teaching machines to discriminate when scanning resumes or approving loans, for example.

And when algorithms do discriminate, computer scientist Suresh Venkatasubramanian says he tends to hear expressions of disbelief, such as, “Algorithms are just code—they only do what you tell them.” But the decisions that machine-learning algorithms spit out are far more complicated and opaque than people think, he says, which makes tracking down an offending line of code nearly impossible. One solution, he says, is to screen algorithms more rigorously, testing them on subsets of data to see whether they produce the same high-quality results for different populations of people (a sketch of that kind of screening follows the guest list below). And Crawford says it might be worth training computer scientists differently, too, to raise their awareness of the pitfalls of machine learning with regard to race, gender, bias, and discrimination.

Segment Guests

Kate Crawford is a principal researcher at Microsoft Research and a visiting professor at the MIT Center for Civic Media in Cambridge, Massachusetts.

Suresh Venkatasubramanian is an associate professor at the School of Computing at the University of Utah in Salt Lake City, Utah.
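
The screening Venkatasubramanian describes, testing an algorithm on subsets of data to check that it performs equally well across populations, can be made concrete in a few lines of Python. What follows is a minimal sketch, not code from the segment: the file applications.csv, its "approved" label, and its demographic "group" column are all hypothetical placeholders.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical loan-application data: numeric feature columns plus a
# binary "approved" label and a demographic "group" column.
df = pd.read_csv("applications.csv")  # assumed file and schema
X = df.drop(columns=["approved", "group"])
y = df["approved"]

# Hold out a test set, carrying each row's group label along for the audit.
X_train, X_test, y_train, y_test, _, group_test = train_test_split(
    X, y, df["group"], test_size=0.3, random_state=0
)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Screen the model: does it perform comparably for every subgroup?
preds = pd.Series(model.predict(X_test), index=X_test.index)
for group in group_test.unique():
    mask = group_test == group
    acc = accuracy_score(y_test[mask], preds[mask])
    rate = preds[mask].mean()  # fraction of this group's applications approved
    print(f"{group}: accuracy={acc:.3f}, approval rate={rate:.3f}")
```

Large gaps between subgroups, in either accuracy or approval rate, are exactly the kind of red flag this sort of screening is meant to surface before an algorithm is deployed.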
