Google adds auto-transcription and simplified grading to its education tools

Many students have returned to in-person classes, but that isn’t stopping Google from making online education more viable. The company has updated Classroom and Workspace for Education with a host of features that improve life for teachers and students alike. In Workspace, for instance, you can now auto-transcribe Meet calls directly into Google Docs — helpful if you want to quickly produce lesson material or help students catch up when they miss lectures. You can also host polls and Q&A sessions in Meet, livestream public events (think school assemblies) to YouTube and use picture-in-picture to manage class presentations without losing sight of your pupils.

Teachers using Classroom, meanwhile, now have access to previously beta-only add-on support that extends functionality beyond what Google can offer. You can get an EdPuzzle add-on to automatically integrate and grade assignments, while a Pear Deck extension can create assignments using lessons from the Pear Deck library. The Classroom updates also make it easier to add YouTube videos to lessons, export grades and get updates through email notifications. An update later in 2022 will let teachers reply directly to students from Gmail notifications.

Google is expanding access to its Read Along app, too. It’s rolling out a beta for a new web version over the next month, so students might not need to lean on their phones as they improve their literacy skills.

The announcements come alongside Chrome OS updates that include improved casting and optimizations for educational apps like Figma. Although these updates might not matter much as the pandemic (hopefully) winds down, they could still be useful as schools increasingly rely on internet-based lessons and coursework.

Hitting the Books: Newton’s alchemical dalliances make him no less of a scientist

The modern world as we know it simply would not exist if not for the mind of Sir Isaac Newton. His invention of differential calculus and pioneering research on the nature of gravity and light are bedrocks of the scientific method. However, in his later years, Newton’s interests were drawn toward a decidedly non-scientific subject: alchemy. Does that investigation invalidate his earlier achievements? Theoretical physicist and philosopher Carlo Rovelli takes up that question in the excerpt below. In his new book of correspondence and musings, There Are Places in the World Where Rules Are Less Important than Kindness: And Other Thoughts on Physics, Philosophy and the World, Rovelli explores themes spanning science, history, politics and philosophy.

There Are Places in the World Where Rules Are Less Important Than Kindness
Riverhead Books

From THERE ARE PLACES IN THE WORLD WHERE RULES ARE LESS IMPORTANT THAN KINDNESS: And Other Thoughts on Physics, Philosophy and the World by Carlo Rovelli published on May 10, 2022 by Riverhead, an imprint of Penguin Publishing Group, a division of Penguin Random House LLC. Copyright © 2022 Carlo Rovelli.


In 1936 Sotheby’s puts up for auction a collection of unpublished writings by Sir Isaac Newton. The price is low, £9,000; not much when compared to the £140,000 raised that season from the sale of a Rubens and a Rembrandt. Among the buyers is John Maynard Keynes, the famous economist, who was a great admirer of Newton. Keynes soon realizes that a substantial part of the manuscript writings deal with a subject that few would have expected Newton to be interested in. Namely: alchemy. Keynes sets out to acquire all of Newton’s unpublished writings on the subject, and soon realizes further that alchemy was not something that the great scientist was marginally or briefly curious about: his interest in it lasted throughout his life. “Newton was not the first of the Age of Reason,” Keynes concludes, “he was the last of the magicians.” 

In 1946 Keynes donated his unpublished Newtoniana to the University of Cambridge. The strangeness of Newton in alchemical guise, seemingly so at odds with the traditional image of him as the father of science, has caused the majority of historians to give the subject a wide berth. Only recently has interest in his passion for alchemy grown. Today a substantial amount of Newton’s alchemical texts have been put online by researchers at Indiana University and are now accessible to everyone. Their existence still has the capacity to provoke discussion, and to cast a confusing light over his legacy. 

Newton is central to modern science. He occupies this preeminent place because of his exceptional scientific results: mechanics, the theory of universal gravity, optics, the discovery that white light is a mixture of colors, differential calculus. Even today, engineers, physicists, astronomers and chemists work with equations written by him, and use concepts that he first introduced. But even more important than all this, Newton was the founder of the very method of seeking knowledge that today we call modern science. He built upon the work and ideas of others — Descartes, Galileo, Kepler, etc — extending a tradition that goes back to antiquity; but it is in his books that what we now call the scientific method found its modern form, immediately producing a mass of exceptional results. It is no exaggeration to think of Newton as the father of modern science. So, what on earth does alchemy have to do with any of this? 

There are those who have seen in these anomalous alchemical activities evidence of mental infirmity brought on by premature aging. There are others who have served their own ends by attempting to enlist the great Englishman among critics of the limitations of scientific rationality. 

I think things are much simpler than this. 

The key lies in the fact that Newton never published anything on alchemy. The papers that show his interest in the subject are extensive, but they are all unpublished. This lack of publication has been interpreted as a consequence of the fact that alchemy had been illegal in England since as early as the fourteenth century. But the law prohibiting alchemy was lifted in 1689. And besides, if Newton had been so worried about going against laws and conventions, he would not have been Newton. There are those who have portrayed him as some kind of demonic figure attempting to glean extraordinary and ultimate knowledge that he wanted to keep exclusively for himself, to enhance his own power. But Newton really had made extraordinary discoveries, and had not sought to keep those to himself: he published them in his great books, including the Principia, with the equations of mechanics still used today by engineers to build airplanes and edifices. Newton was renowned and extremely well respected during his adult life; he was president of the Royal Society, the world’s leading scientific body. The intellectual world was hungry for his results. Why did he not publish anything based on all those alchemical activities?

The answer is very simple, and I believe that it dispels the whole enigma: he never published anything because he never arrived at any results that he found convincing. Today it is easy to rely on the well-digested historical judgment that alchemy had theoretical and empirical foundations that were far too weak. It wasn’t quite so easy to reach this conclusion in the seventeenth century. Alchemy was widely practiced and studied by many, and Newton genuinely tried to understand whether it contained a valid form of knowledge. If he had found in alchemy something that could have withstood the method of rational and empirical investigation that he himself was promoting, there can be no doubt that Newton would have published his results. If he had succeeded in extracting from the disorganized morass of the alchemical world something that could have become science, then we would surely have inherited a book by Newton on the subject, just as we have books by him on optics, mechanics and universal gravity. He did not manage to do this, and so he published nothing.

Was it a vain hope in the first place? Was it a project that should have been discarded even before it began? On the contrary: many of the key problems posed by alchemy, and quite a few of the methods it developed, in particular with reference to the transformation of one chemical substance into another, are precisely the problems that would soon give rise to the new discipline of chemistry. Newton does not manage to take the critical step between alchemy and chemistry. That step would fall to scientists of the next generation, such as Lavoisier.

The texts put online by Indiana University show this clearly. It is true that the language used is typically alchemical: metaphors and allusions, veiled phrases and strange symbols. But many of the procedures described are nothing more than simple chemical processes. For example, he describes the production of “oil of vitriol” (sulfuric acid), aqua fortis (nitric acid) and “spirit of salt” (hydrochloric acid). By following Newton’s instructions, it is possible to synthesize these substances. The very name that Newton used to refer to his attempts at doing so is a suggestive one: “chymistry.” Late, post-Renaissance alchemy strongly insisted on the experimental verification of ideas. It was already beginning to face in the direction of modern chemistry. Newton understands that somewhere within the confused miasma of alchemical recipes there is a modern science (in the “Newtonian” sense) hidden, and he tries to encourage its emergence. He spends a great deal of time immersed in it, but he doesn’t succeed in finding the thread that will untie the bundle, and so publishes nothing.

Alchemy was not Newton’s only strange pursuit and passion. There is another one that emerges from his papers that is perhaps even more intriguing: Newton put enormous effort into reconstructing biblical chronology, attempting to assign precise dates to events written about in the holy book. Once again, from the evidence of his papers, the results were not great: the father of science estimates that the beginning of the world happened just a few thousand years ago. Why did Newton lose himself in this pursuit?

History is an ancient subject. Born in Miletus with Hecataeus, it is already fully grown with Herodotus and Thucydides. There is a continuity between the work of historians of today and those of antiquity: principally in that critical spirit that is necessary when gathering and evaluating the traces of the past. (The book of Hecataeus begins thus: “I write things that seem to me to be true. For the tales of the Greeks are many and laughable as they seem to me.”) But contemporary historiography has a quantitative aspect linked to the crucial effort to establish the precise dates of past events. Furthermore, the critical work of a modern historian must take into account all the sources, evaluating their reliability and weighing the relevance of information furnished. The most plausible reconstruction emerges from this practice of evaluation and of weighted integration of the sources. Well, this quantitative way of writing history begins with Newton’s work on biblical chronology. In this case too, Newton is on the track of something profoundly modern: to find a method for the rational reconstruction of the dating of ancient history based on the multiple, incomplete and variably reliable sources that we have at our disposal. Newton is the first to introduce concepts and methods that will later become important, but he did not arrive at results that were sufficiently satisfactory, and once again he publishes nothing on the subject.

In both cases we are not dealing with something that should cause us to deviate from our traditional view of the rationalistic Newton. On the contrary, the great scientist is struggling with real scientific problems. There is no trace of a Newton who would confuse good science with magic, or with untested tradition or authority. The reverse is true; he is the prescient modern scientist who confronts new areas of scientific inquiry clear-sighted, publishing when he succeeds in arriving at clear and important results, and not publishing when he does not arrive at such results. He was brilliant, the most brilliant—but he also had his limits, like everyone else.

I think that the genius of Newton lay precisely in his being aware of these limits: the limits of what he did not know. And this is the basis of the science that he helped to give birth to.

Hitting the Books: What the ‘Work from Home’ revolution means for those who can’t

The COVID-19 pandemic changed how we live, how we work, how we get from where we live to where we work or even if we have to leave where we live to get to where we work. But the number of workers that have had their commutes shortened from 45 minutes t…

Masterclass offers US college students three months of access for $1

US college students who want to keep learning over the summer break might be interested in checking out the latest promotion from Masterclass. They can get a three-month individual membership for just $1.

Masterclass hosts video-centric classes from notable, successful figures including Lewis Hamilton, Gordon Ramsay, Anna Wintour, Spike Lee and Serena Williams. The company says it now has more than 2,500 lessons on topics including gardening, writing, filmmaking, business leadership, wilderness survival and interior design.

You’ll need a .edu email address and to meet a few other criteria, such as being a current student enrolled in a college or university program in the US. Masterclass says your promotional plan won’t auto-renew. Once it expires, you’ll have the option of continuing your membership at the regular price. The plans start at $15 per month.

It’s not quite as great a deal as the one year of access for $1 Masterclass offered students in 2020. Still, it’s a low-cost way to try the service and learn a thing or two.

You’ll need to act swiftly if you’re interested. Masterclass says there’s a limited supply of promotional memberships available and the offer expires at 11:59PM PT tonight. You can sign up for the so-called Summer of Learning via a dedicated page on the Masterclass website.

Follow @EngadgetDeals on Twitter for the latest tech deals and buying advice.

Niantic’s Campfire app will finally let ‘Pokémon Go’ players chat together

Move over, Discord: Niantic has its own messaging solution in mind for Pokémon Go players: a social AR app called Campfire. It’ll let you organize for events, discover new locations and share content with other players. Think of it like a hyper-local so…

Hitting the Books: How winning the lottery is a lot like being re-struck by lightning

A wise man once said, “Never tell me the odds,” but whether you’re calculating the chances of successfully navigating an asteroid field (3,720:1), shouting “Shazam!” and having it work twice in a row (9 million:1), or winning the state lottery (42 million:1 in California), probabilities influence outcomes in our daily lives for events large and small alike. Yet despite the widespread role they play in our lives, the average person is generally pretty poor at accurately calculating them. As we see in the excerpt below from James C. Zimring’s latest title, Partial Truths: How Fractions Distort Our Thinking, our expectations regarding the likelihood of an event occurring can shift depending on how the question is posed and which fraction is focused upon.

Partial Truths cover
Columbia University Press

Excerpted from Partial Truths: How Fractions Distort Our Thinking by James C. Zimring, published by Columbia Business School Publishing. Copyright (c) 2022 James C. Zimring. Used by arrangement with the Publisher. All rights reserved.


Mistaking the Likely for the Seemingly Impossible: Misjudging the Numerator

The more unlikely an event seems, the more it draws our attention when it does occur and the more compelled we feel to explain why it happened. This just makes good sense. If the world is not behaving according to the rules we understand, perhaps we misunderstand the rules. Our attention should be drawn to unlikely occurrences because new knowledge comes from our attempts to understand contradictions.

Sometimes what seems to be impossible is actually highly probable. A famous example of this is found with playing the lottery (i.e., the lottery fallacy). It is well understood that it is incredibly unlikely that any particular person will win the lottery. For example, the chance of any one ticket winning the Powerball lottery (the particular lottery analyzed in this chapter) is 1/292,000,000. This explains why so much attention is paid to the winners. Where did they buy their ticket? Did they see a fortune teller before buying their ticket, or do they have a history of showing psychic abilities? Do they have any special rituals they carry out before buying a ticket? It is a natural tendency to try to explain how such an unlikely event could have occurred. If we can identify a reason, then perhaps understanding it will help us win the lottery, too.

The lottery fallacy is not restricted to good things happening. Explanations also are sought to explain bad things. Some people are struck by lightning more than once, which seems just too unlikely to accept as random chance. There must be some explanation. Inevitably, it is speculated that the person may have some weird mutant trait that makes them attract electricity, or they carry certain metals on their person or have titanium prosthetics in their body. Perhaps they have been cursed by a mystical force or God has forsaken them.

The lottery fallacy can be understood as a form of mistaking one probability for another, or to continue with our theme from part 1, to mistake one fraction for another. One can express the odds of winning the lottery as the fraction (1/292,000,000), in which the numerator is the single number combination that wins and the denominator is all possible number combinations. The fallacy arises because we tend to notice only the one person with the one ticket who won the lottery. This is not the only person playing the lottery, however, and it is not the only ticket. How many tickets are purchased for any given drawing? The exact number changes, because more tickets are sold when the jackpot is higher; however, a typical drawing includes about 300 million tickets sold. Of course, some of the tickets sold must be duplicates, given that only 292 million combinations are possible. Moreover, if every possible combination were being purchased, then someone would win every drawing. In reality, about 50 percent of the drawings have a winner; thus, we can infer that, on average, 146 million different number combinations are purchased.

Of course, the news does not give us a list of all the people who did not win. Can you imagine the same headline every week, “299,999,999 People Failed to Win the Lottery, Again!” (names listed online at www.thisweekslosers.com). No, the news only tells us that there was a winner, and sometimes who the winner was. When we ask ourselves, “What are the odds of that person winning?” we are asking the wrong question and referring to the wrong fraction. The odds of that particular person winning are 1/292,000,000. By chance alone, that person should win the lottery once every 2,807,692 years that they consistently play (assuming two drawings per week). What we should be asking is “What are the odds of any person winning?”
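Zimring’s waiting-time figure is easy to check. Here’s a minimal Python sketch of that arithmetic, using only the numbers quoted in the excerpt (the script is our illustration, not the book’s):

```python
# Figures quoted in the excerpt: 1 winning combination out of ~292 million,
# and two Powerball drawings per week.
total_combinations = 292_000_000
drawings_per_year = 2 * 52

# A steady player expects to wait ~292 million drawings for a win;
# converting drawings to years yields the excerpt's figure.
expected_years = total_combinations / drawings_per_year
print(f"{expected_years:,.0f} years")  # -> 2,807,692 years
```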

In probability, the chances of either one thing or another thing happening are the sum of the individual probabilities. So, assuming no duplicate tickets, if only a single person were playing the lottery, then the odds of having a winner are 1/292,000,000. If two people are playing, the odds of having a winner are 2/292,000,000. If 1,000 people are playing, then the odds are 1,000/292,000,000. Once we consider that 146 million different number combinations are purchased, the top of the fraction (numerator) becomes incredibly large, and the odds that someone will win are quite high. When we marvel at the fact that someone has won the lottery, we mistake the real fraction (146,000,000/292,000,000) for the fraction (1/292,000,000) — that is, we are misjudging the numerator. What seems like an incredibly improbable event is actually quite likely. The human tendency to make this mistake is related to the availability heuristic, as described in chapter 2. Only the winner is “available” to our minds, and not all the many people who did not win.
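To see the two fractions side by side, here is a short Python sketch of the comparison Zimring describes (again our illustration, using the excerpt’s figures and its no-duplicates simplification):

```python
total_combinations = 292_000_000

# The fraction we fixate on: one particular ticket winning.
p_one_person = 1 / total_combinations

# The fraction that matters: ~146 million distinct combinations in play,
# any one of which makes "someone wins" true.
distinct_played = 146_000_000
p_someone_wins = distinct_played / total_combinations

print(f"{p_one_person:.2e}")    # 3.42e-09 -- seemingly impossible
print(f"{p_someone_wins:.0%}")  # 50% -- matching the observed share of winning drawings
```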

Similarly, the odds of twice being struck by lightning over the course of one’s life are one in nine million. Because 7.9 billion people live on Earth, it is probable that 833 people will be hit by lightning twice in their lives (at least). As with the lottery example, our attention is drawn only to those who are struck by lightning. We fail to consider how many people never get struck. Just as it is unlikely that any one particular person will win the Powerball lottery, it is highly unlikely that no one will win the lottery after a few drawings, just given the number of people playing. Likewise, it is very unlikely that any one person will be twice hit by lightning, but it is even more unlikely that no one will, given the number of people in the world.
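The same expected-value arithmetic applies here, with one caveat: using the 7.9 billion population quoted above, the expectation works out closer to 878 people; the book’s figure of 833 corresponds to a population of roughly 7.5 billion. A quick Python sketch (ours, not Zimring’s):

```python
# Treat each person's lifetime as an independent 1-in-9-million chance
# of being struck by lightning twice (the excerpt's simplifying assumption).
p_struck_twice = 1 / 9_000_000
population = 7_900_000_000

expected_double_strikes = population * p_struck_twice
print(f"{expected_double_strikes:.0f} people")  # -> 878 people
```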

So, when we puzzle over such amazing things as someone winning the lottery or being twice struck by lightning, we actually are trying to explain why a highly probable thing happened, which really requires no explanation at all. The rules of the world are working exactly as we understand them, but we are mistaking the highly likely for the virtually impossible.

Hitting the Books: Why we need to treat the robots of tomorrow like tools

Do not be swayed by the dulcet dial-tones of tomorrow’s AIs and their siren songs of the singularity. No matter how closely artificial intelligences and androids may come to look and act like humans, they’ll never actually be humans, argue Paul Leonardi, Duca Family Professor of Technology Management at the University of California Santa Barbara, and Tsedal Neeley, Naylor Fitzhugh Professor of Business Administration at the Harvard Business School, in their new book The Digital Mindset: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI — and therefore should not be treated like humans. The pair contends in the excerpt below that treating machines like people hinders our interactions with advanced technology and hampers its further development.

Digital Mindset cover
Harvard Business Review Press

Reprinted by permission of Harvard Business Review Press. Excerpted from THE DIGITAL MINDSET: What It Really Takes to Thrive in the Age of Data, Algorithms, and AI by Paul Leonardi and Tsedal Neeley. Copyright 2022 Harvard Business School Publishing Corporation. All rights reserved.


Treat AI Like a Machine, Even If It Seems to Act Like a Human

We are accustomed to interacting with a computer in a visual way: buttons, dropdown lists, sliders, and other features allow us to give the computer commands. However, advances in AI are moving our interaction with digital tools toward more natural-feeling and human-like exchanges. What’s called a conversational user interface (UI) gives people the ability to interact with digital tools through writing or talking that’s much closer to the way we interact with other people, like Burt Swanson’s “conversation” with Amy the assistant. When you say, “Hey Siri,” “Hello Alexa,” and “OK Google,” that’s a conversational UI. The growth of tools controlled by conversational UIs is staggering. Every time you call an 800 number and are asked to spell your name, answer “Yes,” or say the last four digits of your social security number, you are interacting with an AI that uses a conversational UI. Conversational bots have become ubiquitous in part because they make good business sense, and in part because they allow us to access services more efficiently and more conveniently.

For example, if you’ve booked a train trip through Amtrak, you’ve probably interacted with an AI chatbot. Its name is Julie, and it answers more than 5 million questions annually from more than 30 million passengers. You can book rail travel with Julie just by saying where you’re going and when. Julie can pre-fill forms on Amtrak’s scheduling tool and provide guidance through the rest of the booking process. Amtrak has seen an 800 percent return on their investment in Julie. Amtrak saves more than $1 million in customer service expenses each year by using Julie to field low-level, predictable questions. Bookings have increased by 25 percent, and bookings done through Julie generate 30 percent more revenue than bookings made through the website, because Julie is good at upselling customers!

One reason for Julie’s success is that Amtrak makes it clear to users that Julie is an AI agent, and they tell you why they’ve decided to use AI rather than connect you directly with a human. That means that people orient to it as a machine, not mistakenly as a human. They don’t expect too much from it, and they tend to ask questions in ways that elicit helpful answers. Amtrak’s decision may sound counterintuitive, since many companies try to pass off their chatbots as real people and it would seem that interacting with a machine as though it were a human should be precisely how to get the best results. A digital mindset requires a shift in how we think about our relationship to machines. Even as they become more humanish, we need to think about them as machines — requiring explicit instructions and focused on narrow tasks.

x.ai, the company that made meeting scheduler Amy, enables you to schedule a meeting at work, or invite a friend to your kids’ basketball game by simply emailing Amy (or her counterpart, Andrew) with your request as though they were a live personal assistant. Yet Dennis Mortensen, the company’s CEO, observes that more than 90 percent of the inquiries that the company’s help desk receives are related to the fact that people are trying to use natural language with the bots and struggling to get good results.

Perhaps that was why scheduling a simple meeting with a new acquaintance became so annoying to Professor Swanson, who kept trying to use colloquialisms and conventions from informal conversation. In addition to the way he talked, he made many perfectly valid assumptions about his interaction with Amy. He assumed Amy could understand his scheduling constraints and that “she” would be able to discern what his preferences were from the context of the conversation. Swanson was informal and casual—the bot doesn’t get that. It doesn’t understand that when asking for another person’s time, especially if they are doing you a favor, it’s not effective to frequently or suddenly change the meeting logistics. It turns out it’s harder than we think to interact casually with an intelligent robot.

Researchers have validated the idea that treating machines like machines works better than trying to be human with them. Stanford professor Clifford Nass and Harvard Business School professor Youngme Moon conducted a series of studies in which people interacted with anthropomorphic computer interfaces. (Anthropomorphism, or assigning human attributes to inanimate objects, is a major issue in AI research.) They found that individuals tend to overuse human social categories, applying gender stereotypes to computers and ethnically identifying with computer agents. Their findings also showed that people exhibit over-learned social behaviors such as politeness and reciprocity toward computers. Importantly, people tend to engage in these behaviors — treating robots and other intelligent agents as though they were people — even when they know they are interacting with computers, rather than humans. It seems that our collective impulse to relate with people often creeps into our interaction with machines.

This problem of mistaking computers for humans is compounded when interacting with artificial agents via conversational UIs. Take, for example, a study we conducted with two companies that used AI assistants that provided answers to routine business queries. One used an anthropomorphized AI that was human-like. The other wasn’t.

Workers at the company that used the anthropomorphic agent routinely got mad at it when it did not return useful answers. They said things like, “He sucks!” or “I would expect him to do better” when referring to the results given by the machine. Most importantly, their strategies to improve relations with the machine mirrored strategies they would use with other people in the office. They would ask their question more politely, they would rephrase it in different words, or they would try to strategically time their questions for when they thought the agent would be, in one person’s terms, “not so busy.” None of these strategies was particularly successful.

In contrast, workers at the other company reported much greater satisfaction with their experience. They typed in search terms as though they were addressing a computer, and spelled things out in great detail to make sure that an AI, which could not “read between the lines” and pick up on nuance, would heed their preferences. The second group routinely remarked on how surprised they were when their queries were returned with useful or even surprising information, and they chalked up any problems that arose to typical bugs with a computer.

For the foreseeable future, the data are clear: treating technologies — no matter how human-like or intelligent they appear — like technologies is key to success when interacting with machines. A big part of the problem is that human-like tools set the expectation that they will respond in human-like ways, and they make us assume that they can infer our intentions, when they can do neither. Interacting successfully with a conversational UI requires a digital mindset that understands we are still some ways away from effective human-like interaction with the technology. Recognizing that an AI agent cannot accurately infer your intentions means that it’s important to spell out each step of the process and be clear about what you want to accomplish.

A US college is shutting down for good following a ransomware attack

Lincoln College says it will close this week in the wake of a ransomware attack that took months to resolve. While COVID-19 severely impacted activities such as recruitment and fundraising, the cyberattack seems to have been the tipping point for the Illinois institution.

The college has informed the Illinois Department of Higher Education and Higher Learning Commission that it will permanently close as of May 13th. As NBC News notes, it’s the first US college or university to shut down in part because of a ransomware attack.

Lincoln says it had “record-breaking student enrollment” in fall 2019. However, the pandemic caused a sizable fall in enrollment, with some students opting to defer college or take a leave of absence. The college — one of only a few rural schools to qualify as a predominantly Black institution under the Department of Education — said those factors affected its financial standing.

Last December, Lincoln was hit by a cyberattack, which “thwarted admissions activities and hindered access to all institutional data, creating an unclear picture of fall 2022 enrollment. All systems required for recruitment, retention and fundraising efforts were inoperable,” the college said in a statement posted on its homepage. “Fortunately, no personal identifying information was exposed. Once fully restored in March 2022, the projections displayed significant enrollment shortfalls, requiring a transformational donation or partnership to sustain Lincoln College beyond the current semester.”

Barring a last-minute respite, the one-two punch of the pandemic and a cyberattack has brought an end to a 157-year-old institution. Lincoln says it will help students who aren’t graduating this semester transfer to another college.

Over the last few years, ransomware hackers have attacked other educational facilities, as well as hospitals, game studios, Sinclair Broadcast Group and many other companies and institutions.