The ABQ Correspondent                                                   

Last Two Issues

September 2020

FOLLOW ME

The Correspo has more than once (and quite recently) discussed the Hear-a-Light device developed in the pre-integrated-circuit days of the 1960s, when discrete transistors were still pretty new. About the size of a large marking pen, the Hear-a-Light translated changing light levels into a tone of varying pitch, so that blind people could hear variations in the environment around them…which is a whole lot more useful information than you’d think. The device was astonishingly small for its time. A lot has happened in the last sixty-some years, and smart people are coming up with vastly improved systems along these lines, incorporating novel insights. A student at Loughborough University in Leicestershire has been developing a hand-held guide dog, for example. One of the key insights is that a guide dog doesn’t typically say “stop” or “turn right here.” The dog moves so that its harness grip moves the walker’s hand one way or another, indicating where to go, leading the person holding the harness, even indicating the level of urgency. The hair-dryer-sized device (named Theia by its developer) can do the same thing, using a Control Moment Gyroscope of the sort that lets space satellites maintain their orientation. Theia pushes and pulls the holder’s hand the right way.

Theia uses lidar to see what’s around it, GPS to know where it is, and listens with machine intelligence for instructions, or, one supposes, questions from its user. Theia is a smart system, becoming steadily more helpful. This isn’t a commercial product, just an experimental university project, but it’s well started, intriguing and full of possibilities for doing good.

 

MAYBE A BIG DEAL

An old rule of thumb says that if you can change the throughput of a system by three orders of magnitude…multiplying it by a thousand, so that you get the same end product at a thousandth of the cost you are accustomed to, maybe in a thousandth of the time, or with a thousandth of the people required, or in a thousandth of the space you previously needed…the new capability will change whatever you are trying to accomplish overall. You don’t just do the same thing faster and cheaper; you tend to change your whole approach, dealing with different problems that seem bigger now that this one is solved.

See what computers have wrought in a zillion applications. Minor example: banks used to keep “banker’s hours”; you couldn’t get in to do your business before 10:00am or after 3:00pm. Most of the bankers weren’t lazy and spoiled, lording it over us peons; they could barely complete the day’s bookkeeping in those three or four hours of the normal business day when the doors were closed. You may have noticed that computers have changed the way banks interact with their customers, producing a substantial effect on how society operates.

There’s a possibly big thing afoot: A pleasingly straightforward report on a website called Independent tells of work by a team at George Washington University to speed up machine learning.  The report includes this line: “…their photon-based TPU was able to perform between 2-3 orders of magnitude higher than an electric TPU.”

If you don’t know what a TPU is…I didn’t, and it really doesn’t matter here…you may consult an explanation for beginners that will require only a couple of intense weeks of study to get some idea of what TPUs are about. Or just consider the subheadline of the article: “Performance of photon-based neural network processor is 100-times higher than electrical processor.” That means the smart machines can learn a hundred times faster without using so much power that the lights dim in the neighborhood. Presumably, the next generation of these devices will in fact speed the work by three orders of magnitude, enabling new approaches to smarts that we haven’t yet had an opportunity to experiment with…indeed, that we haven’t yet thought of.

They say “higher than an electric TPU” because these things transmit data with light, which can reach any point in sight rather than just running along a conductor that leads from one place to another. For decades associates have been telling me that light can enable fast, parallel manipulation of data…perhaps on the order of neural nets in the human brain, or more…and that light systems offer huge opportunity. When I raised this report with one of them, he said with some excitement, “Maybe they’ve finally figured out how to do it.” This seems worth tracking.

------------------------------------------------------------------------------------------------------------

NELS MUSES 
 

Item:

A correction: the remark last month that my first view of videoconferencing was at the 1983 IPRC was wrong. Really, the first was at the 1968 Fall Joint Computer Conference in San Francisco, when Doug Engelbart’s team staged The Mother of All Demos, which included “videoconferencing,” though nobody called it that. I didn’t really see much of it, because I was preparing for the talk I gave next on that same platform and running the Computer Science Theater for the conference. Engelbart’s presentation revealed the computer mouse to the world. It’s startling to realize that the $20 mouse I’m using to set up the links in this piece arose from what the fellows were first showing there. See Engelbart and Bill English being interviewed by John Markoff. Those guys done good.

 

Item:

A company called Hello Robot is producing a robot that looks potentially useful in everyday settings. No cutesy faces, but lots of capability. There are several videos on the site and a bunch of good stills. Fun to see the dog playing with the robot.

 

Item:

Just some interesting recent comments from a friend: “Building on landfill along the San Francisco Bay has never been a wise idea and all of that residential real estate out on the water around Brisbane, Burlingame, San Mateo, etc. is liable to end up under it when a chunk of the San Andreas gives way closer than Santa Cruz, the epicenter of the Loma Prieta quake I was part of first hand as the Oakland Bay Bridge gave way in my rearview mirror… In my landscape design days …it was not uncommon to hit saltwater muck digging holes for posts at high tide. Pool companies had to watch tide charts also as the uncured pool materials used to buckle and pop up if not sprayed on and finished at low tide….Sadly you could count on trees dying after being exposed to the salt water as long as they could stand it…when a friend was dating his wife, they had to work around a tide chart because she had trouble opening the door or windows in Foster City at high tide.”

 

ITEM FROM THE PAST

 

This item from 1985 is brought to mind by the much publicized (easy to publicize because they are so interesting) activities of Boston Dynamics’ Spot robot, which has finally gone on sale to anybody who is up for the stiff $75k price.

SURPRISES IN ROBOTRY

Client Rio Grande Robotics (in Las Cruces, New Mexico, of all places) is doing a market survey of the personal robot business by selling robots. They publish a catalog of robots and robot stuff...getting thousands of inquiries, making a wholesome number of sales. Bill Morrissey, Technical Director, reports a surprise: "There is almost no overlap between computer hobbyists and the people who buy robots from us." These customers tolerate computers as tools necessary for dealing with robots, but the computer is just overhead. They are interested in something else. And the computer buffs already have a hobby. This is not what we all expected. Hm.

In the early-to-mid ‘80s, when companies like R.B. Robot, Androbot, and Heath (as in Heathkit) were offering “robots” that individuals could afford, the whole industry had an identity crisis that has lasted until Spot was offered at retail. People expected robots to be like low-level R2-D2s and C-3POs, clumsier and more limited, but with a spark of self-consciousness and empathy that made the characters interesting and endearing. Basically, the retail robots of the mid-‘80s were expected to be, for a few hundred dollars, what Spot almost is now, in 2020. Look at those days here, and here…and have some fun with Cybernetic Zoo.

We were insanely optimistic. (I was just as silly as anybody in my published expectations of how soon “real” robots would be among us.) We’re not there yet; don’t hold your breath.

Part of the optimism arose from the by-then-already-impressive surge of personal computers into many aspects of society. “Personal robots” were thought to be like personal computers, escaping institutions like banks and governments at last, falling into the hands of ordinary people. Well, no. The institutions did indeed have computers with which to optimize their bookkeeping at the expense of regimenting the general populace infuriatingly. (Remember getting bills on data cards saying “Do not fold, spindle or mutilate this card”? Many were inspired by that instruction, which didn’t even say “please,” to fold, spindle, mutilate, crumple, and smear the cards at every opportunity.) The institutions did not have robots of the sort we imagine; they had no technology that could leak out to ordinary people, giving them individual power and satisfaction. They still don’t, though the machines under computer control are increasingly dexterous, precise, mobile, gentle, and smart.

Machine intelligence, with the ability to learn from experience, is beginning to creep into the systems…and this is why the reference to Rio Grande Robotics, way back when, is relevant.

In the early ‘80s, Joe Bosworth1 of R.B. Robot contacted me to talk about a book about neural nets and machine intelligence that Iben Browning and I had published a few years earlier.

At the time, the remarkable technical team at the company I was with was developing neural-net-based Adaptive Pattern Recognition Processing, which generalized pattern recognition. (A string of bits is a string of bits, whether it’s from images, audio, instrument readings, or whatever else you have.) The company had built an operating system in which APRP (in just 4k bytes) enabled machine intelligence capabilities in a plain-language programming system2 running on the Apple II and the IBM PC.

Unlike Apple and IBM, Joe saw opportunity in what we had, and R.B. Robot contracted with us to develop a robot control language that would give almost anybody the ability to program an RB5X robot that could interact with its environment.

Nikki Delgado at Rio Grande Robotics worked with kids in Las Cruces to program RB5X robots to perform, using RCL as the language. She commented that the activity that stirred the most interest was teaching the robot “break-dancing.” These were mostly non-computer-literate Hispanic kids in pretty much backward New Mexico3…using machine intelligence to do something that seemed worthwhile to them. They caught on quickly.

Apple and Microsoft kept shrugging us off. (“Yes, yes, we have experts working on that.”) It was a couple of years before anybody with muscle caught on, and the technology slipped out of sight.

1Joe became a good friend.

2Even I, who had floundered miserably with FORTRAN, ALGOL, and BASIC, could program a bit in that language…and I wrote the manual. One of our directors sensibly asked why, if the language was so simple, we needed a 300-page manual. The answer was that we could use no shorthand computer jargon with our audience of people who weren’t interested in computers, but just wanted to accomplish something; we had to spell everything out in plain language. That takes a lot of words.

3One feature of the language was that you could create synonyms for the commands…using Spanish instead of English, for example…or Russian, German, Japanese, Arabic, Esperanto, or all of them if they were more convenient.
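
That synonym feature amounts to a small alias table sitting in front of the command interpreter. Here is a minimal sketch of the idea in Python…the commands and foreign words below are invented for illustration, not actual RCL vocabulary:

    # Toy alias table: every synonym resolves to one canonical command.
    # All names here are made up for illustration, not real RCL words.
    COMMANDS = {"forward", "turn", "stop"}

    SYNONYMS = {
        "adelante": "forward",   # Spanish
        "gira": "turn",
        "alto": "stop",
        "vorwärts": "forward",   # German
    }

    def resolve(word):
        """Map a user's word to a canonical command, or None if unknown."""
        word = word.lower()
        if word in COMMANDS:
            return word
        return SYNONYMS.get(word)

    print(resolve("Adelante"))   # -> forward
    print(resolve("stop"))       # -> stop

The point of the design is that the interpreter itself never changes; only the little lookup table does, so any language (or several at once) can sit in front of the same machinery.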

-------------------------------------------------------------------------------------------------

 

August 2020

CLOSE COMMUNICATION

Natural neural nets in living critters have features that were startling to this ignorant outsider who first heard about them back in the ‘60s.* For example, while it seems plausible that an electrical signal moves along the axon of a single nerve cell for delivery of an input to other cells in a network, it was surprising to learn that the axon does not actually “touch” the next cell and zap it with a tiny jolt of electricity. The signal is delivered to that next cell in the form of a chemical, not electricity. At the synapse, the transfer point at the end of the axon, the electrical signal apparently causes formation of a chemical that travels across a gap to the next cell, where it causes a reaction that gives that next cell an input of data. Who’d have thought? It seems that some of that “transfer chemical” accumulates in the gap, dissipating only gradually. The more often a signal is passed at that synapse, the greater the residual accumulation of the stuff, making it more likely that some will get across the gap the next time a signal comes through…creating a “preferred path” in the network. It’s easier to trigger the second…third…etc…time than the first. Now a three-university team is working on technology that can interact directly with the activity at the synapses. A quote from the New Atlas report on the work: “…biohybrid synapses…let living cells communicate with electronic systems, not with electrical signals but with neurotransmitters like dopamine.”
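
That “preferred path” business is, loosely, what artificial neural nets imitate with adjustable connection weights. Here is a toy sketch in Python of the accumulate-and-dissipate idea described above…a cartoon, mind you, not the team’s biohybrid chemistry, and every number in it is invented:

    import random

    class ToySynapse:
        """Cartoon of the 'preferred path' effect: each signal leaves a
        residue of transfer chemical that dissipates slowly, so a busy
        synapse becomes steadily easier to trigger."""

        def __init__(self, decay=0.9, boost=0.2, base=0.3):
            self.residue = 0.0  # accumulated transfer chemical (invented units)
            self.decay = decay  # fraction of residue surviving each quiet step
            self.boost = boost  # residue deposited by each signal
            self.base = base    # assumed baseline chance of crossing the gap

        def fire(self):
            # The more residue in the gap, the likelier the signal crosses.
            crossed = random.random() < min(1.0, self.base + self.residue)
            self.residue += self.boost   # every signal leaves more residue
            return crossed

        def rest(self):
            self.residue *= self.decay   # residue dissipates only gradually

    s = ToySynapse()
    for _ in range(10):
        s.fire()
        s.rest()
    print(f"residue after 10 signals: {s.residue:.2f}")  # path is now 'preferred'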

Mind you, this just hints at the complexity of the natural neural net. The axon of a neuron typically splits into multiple branches, each carrying that electrical signal to its own synapse associated with another cell…and not just a few branches, but as many as three thousand. They reach out to other neurons more or less randomly; that is, probably reaching into preferred regions of cells in the system, but not aiming for any place or cell in particular. There are zillions of cells to choose from. Each branch grows out until it hits something and stops there. Not only does each neuron connect to hundreds of others, but hundreds of others connect to it, in a semi-organized, staggeringly complex system that learns from experience how to survive and do something helpful for the critter of which it is a part.

One guesses that the university team has some way to go in integrating its hardware into the semi-organized system that learns how to be useful. The idea of interacting directly with such a system right down at the synapse level is remarkable.

*It’s likely that some of the information acquired back then has been refuted or modified since, but the basics seem to be pretty much accepted still. It’s hard for a persistently ignorant outsider to know.

 

ROW, ROW, ROW YOUR BOAT*

Some decades since, our little research group was prodded (by a desperate need for funds) into undertaking a project to develop a small device that would enter a person’s large intestine and travel its length while transmitting images from a small camera to a skilled observer, who would detect things of medical interest in the images. That project was challenging in the pre-integrated-circuit era, before today’s technical tools were available…but there was reason for hope. After all, we were about to send people to the moon, and this proposed journey was just a few feet. Indeed, we made some progress before the funding faltered. Our reports are in the literature somewhere, perhaps even useful.

One step along our way is memorable. It seemed important that we should know something about how such examinations were currently performed, and arrangements were made for associate Paul Honoré to attend one with a prominent proctologist who could explain all the considerations as the examination proceeded. Paul was big on designing electronics systems, constructing linear accelerators, and the like, and he was hesitantly game to learn what we needed to know first-hand. He did indeed gather a great deal of useful information, reporting but a single glitch in the proceedings. Because the process involved a certain amount of X-ray activity, he was handed a lead apron just before going into the examination room. Unfamiliar with the garment, he had some trouble putting it on. “I managed to get my head through the armhole,” he said, “and I was struggling ineffectively before somebody grabbed me and sorted things out. I realized during that embarrassing process that the patient was actually present on a gurney, wondering in horror why this lunatic was to be assisting in his examination. I feel bad about that.”

A current (2020) report in Science Robotics indicates dramatic progress in projects along the same lines; in this case a team has developed devices that can swim (well, roll, they say) upstream in a blood vessel under very tight control. These things are between about 3 and 8 microns in diameter. (The thinner sort of human hair is about 17 microns in width.) The devices can be steered to particular targets, e.g. cancer cells, and they can move about 600 microns a second. It would take about 1.7 seconds to travel a millimeter, about 42 seconds to travel an inch. That’s impressive, but not exactly zipping along recklessly; it sounds comfortingly conservative. The device can carry a payload, perhaps something to discourage cancer cells. This is still very early going in development; your physician can’t prescribe it for treatment next week, but it’s impressive to some of us who were trying to do something far simpler in what was, comparatively, the technical Stone Age.
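For the arithmetic-minded, the travel times quoted above follow directly from the reported speed; a quick check in Python:

    # Travel times implied by the reported ~600 microns/second.
    speed = 600.0          # microns per second (figure from the report)
    millimeter = 1_000     # microns in a millimeter
    inch = 25_400          # microns in an inch

    print(f"1 mm:   {millimeter / speed:.1f} seconds")   # ~1.7
    print(f"1 inch: {inch / speed:.1f} seconds")         # ~42.3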

*Sorry, but it’s not possible to think of this without recalling the Pogo/Walt Kelly version:

Ro Ro Rover both

Jetly don extreme

Marilyn, Marilyn, Marilyn, Marilyn

Wife fudge jester theme

…which sounds great sung as a round with friends and beer.

------------------------------------------------------------------------------------------------------------

NELS MUSES 
 

Item:

Referring again to last month’s discussion of the desirability of cutting the grass and stuff growing around a house to reduce the number of snakes and increase their visibility, Dar Scott says: “I don't know about Banded Kraits, but I once was attacked by a deadly banded crate and had to seek medical help.”

 

Item:

The Correspo has commented a number of times on 3D printing and on synthetic meat. A company called Redefine Meat has combined the technologies and is producing a 3D printer that replicates “…not just the look of steak, but the taste, and texture as well.” The all-vegetable product even bleeds as a piece of beef would…and if the steaks shown in the video are really the product of the printer, they look just right. Haven’t seen a review of what they’re like to eat, but one supposes that a convincingly steak-like product is at the very top of the developer’s list of goals. Indeed, with close control of the ingredients and every step in the process, it should be possible to give customers steaks with precisely the characteristics they want. And there’s no obvious reason steak should be the only product. Want chicken, ostrich, mahi mahi, salmon, ham, lamb, octopus, or chimera? Want bones? Could be. Part of the appeal is that the amount of land and water necessary to make the product is said to be less than ten percent of what’s required to grow cattle. The developers say the cost of the current steak product is in the “high-end steakhouse” range but will clearly drop with large-volume production. Redefine has expected to sell its printers to restaurants, but things being…um, how they are…one can imagine steaks being printed to order in the local supermarket, perhaps operating next to the robot breadmaker we discussed a while back.

 

Item:

It’s been a while since the Correspo pointed to the wonderful robotics handiwork of the German firm Festo. Here are some of their birds, butterflies, and other things flying. We haven’t quite figured out how Festo earns money to let them play with these things, but are certainly impressed by their foresight in creating a lobby in their building big enough for these splendid demonstrations.

 

ITEM FROM THE PAST

 

Well, as sometimes happens, this isn’t really an “item” from the past, but a recollection that should have been an item some years ago. It’s brought to mind by a recent online mention of the 1968 incident in which a Japan Airlines DC8 landed in San Francisco Bay a couple of miles short of the SFO runway for which it was headed.

The water was just deep enough to cushion the landing and just shallow enough that the front exit of the plane was above the surface. Nobody was injured or wet or even, apparently, much upset. The plane was pulled out, refurbished, and put back into service. Altogether quite amazing.

The report says the DC8 was removed from the bay within fifty-five hours of the landing. I’m not contesting the report, but it surprises me, because I saw the thing in the water more than once, driving by on the Bayshore Freeway, and I’d have sworn it was there for a couple of weeks. The sight was both memorable and mundane…just a great big airplane sitting in the bay, sort of closer than usual to the surface.

The report also places the site “near Coyote Point.” Again, I’d have said it was closer to SFO, but the mention of Coyote Point warms my heart, because I went to school there. It was one of three campuses of what was then known as San Mateo Junior College. The place had long been an island with an interesting history before the bay was filled in, converting it to a peninsula. During WWII a branch of the U.S. Merchant Marine Academy (the King’s Point of the West) was located there, and after the war San Mateo County took over the barracks, classrooms, gym, etc. for the J.C.

The buildings had been constructed hastily, and by 1952 were growing decrepit. All the buildings were heated with steam from a boiler down on the wharf. The buried pipes began leaking early on, emitting clouds of steam at points around the campus. On a cool winter morning it was like going to school in Yellowstone Park, with fumaroles sending warm columns of white mist up into the tall, fragrant eucalyptus trees.

Those trees had long proved an obstacle to aviation. During WWII, military planes also heading for landings at San Francisco Airport reportedly clipped the eucalyptus more than once.

To round out this nostalgia, another image from the era is very clear. I sometimes lunched sitting atop the cliffs that dropped maybe thirty feet down to the bay at the edge of the campus. At times small, fast birds gathered there. They’d dive dramatically from the very tops of the trees to touch the waves, then shoot all the way back to the treetops. Not bad.

The pilot who made the awkward landing of that Japan Airlines flight also returned to service, chagrined, chastened, and retrained, but game. He brought ‘em all out safe.

   ----------------------------------------------------------------------------

 Copyright © 2020 by ABQ Communications Corporation    All Rights Reserved