by Hazel Anna Rogers for the Carl Kruse Blog
Artificial intelligence, more commonly known as AI, is a branch of computer science focused on creating intelligent, task-oriented machines – machines that perform tasks which would usually require some degree of human intelligence. This is a general definition, and one which perhaps softens the fear-riddled narrative that inevitably accompanies such technological innovations. We hear the words ‘intelligent’ and ‘machine’, and we are immediately thrown into several philosophical dilemmas: what does ‘intelligent’ mean? Will these machines eventually become more ‘intelligent’ than us? And what would happen if they did?
Artificial Intelligence is probably one of the most revolutionary developments in human history, and the world is already witnessing its transformative capabilities. Like most great human inventions (nuclear energy, for example), it has two sides: one that could be used maliciously, and a positive one that powers cutting-edge solutions to our problems and will help advance our technological civilization.
Bill Diamond, CEO of the SETI Institute, sets up the discussion by questioning whether AI is indeed friend or foe: could AI be a panacea for some of humanity’s most difficult challenges? Or is it inherently a job-stealing, freedom-ending evil destined to hold us hostage at some point in the future? Upon hearing Bill articulate the latter view – a widespread belief among a public less knowledgeable about AI – we see smiles from the call’s participants: Siddha Ganju and Alex Lavin, both engineers, inventors, and entrepreneurs, and both contributors to the Frontier Development Lab (FDL). FDL is a public-private partnership between NASA, the SETI Institute, Trillium Technologies and leaders in commercial AI, space exploration and Earth science, dedicated to using AI to solve complex problems in space exploration. The moderator of the discussion is James Parr, Founder and CEO of Trillium Technologies, an FDL partner. The chat, which can be viewed in its entirety here, is sponsored by the Carl Kruse Blog.
The conversation which follows explains, through practical and theoretical examples, how networked AI can amplify human effectiveness by performing tasks such as complex decision-making, reasoning and learning, sophisticated analytics and pattern recognition, visual acuity, speech recognition and language translation. Along the way, the AI specialists joining the discussion share some trials and tribulations that can and have occurred within the different fields of AI and they suggest pathways to mitigate the potential negative impacts of this innovative and already omnipresent technology.
Siddha, an architect at Nvidia focusing on the self-driving initiative, commences the discussion by explaining the background of her video-call screen. The image framing her head comes from a project started in 2017 dubbed ‘the camera for all-sky meteor surveillance’, which uses cameras around the world pointed at the night sky to monitor meteor and asteroid activity. Recently, on January 18th of this year, the project detected Asteroid 1994 PC1 passing within roughly a million miles of Earth (no cause for alarm – the asteroid passed us safely that same day).
James follows Siddha’s comments by asking whether she ever experienced any moments of revelation during her time working in AI. Siddha responds by recalling her bachelor’s degree, which she completed in 2011. At that point, she says, machine learning was having its resurgence. It had previously been used mainly for audio recognition, but suddenly these models were beginning to recognise cats, and – she thought – why would they need to do that? Then, while on holiday in Delhi one summer with her grandparents, Siddha reflected on the possibilities of AI. Her grandparents were hard of hearing, and she realised that AI – specifically voice recognition (and the whole cat thing) – could potentially be used to build something that would help her grandparents be more self-sufficient, something they could eventually use themselves. This idea became her project during her master’s degree.
Alex Lavin, who currently works at Pasteur Labs and ISI (Institute for Simulation Intelligence), also responds to James’ question. He suggests that his own ‘moments of revelation’ happened a few times. One was during a summer internship before grad school at NASA Ames (NASA’s Ames Research Center, one of ten NASA field centres, which focuses on aeronautics, exploration technology and science aligned with the centre’s core capabilities) in the Bay Area, where he met many computer vision experts. Later, Alex completed a grad-school lunar rover project with the Google Lunar XPRIZE, developing path-planning algorithms to take the rover from the lander to the goal site, including how to intelligently balance sunlight, temperature, hazards, and the risks of different types of terrain. James follows up Alex’s discourse by asking about simulation intelligence, a field in which Alex is deeply involved.
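Rover path planning of the kind Alex describes is commonly implemented as a weighted-graph search. The following is a minimal illustrative sketch, not the actual XPRIZE code: it assumes terrain has already been reduced to a grid of per-cell traversal costs (where a single number stands in for the blend of sunlight, temperature, and hazard risk) and runs Dijkstra's algorithm over it.

```python
import heapq

def plan_path(cost_grid, start, goal):
    """Dijkstra search over a grid of per-cell traversal costs.

    Each cell's cost can encode a weighted blend of terrain risk,
    temperature, and sunlight availability; higher means costlier.
    Returns the list of (row, col) cells from start to goal.
    """
    rows, cols = len(cost_grid), len(cost_grid[0])
    frontier = [(0, start, [start])]  # (cost so far, cell, path taken)
    visited = set()
    while frontier:
        cost, cell, path = heapq.heappop(frontier)
        if cell == goal:
            return path
        if cell in visited:
            continue
        visited.add(cell)
        r, c = cell
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in visited:
                heapq.heappush(
                    frontier,
                    (cost + cost_grid[nr][nc], (nr, nc), path + [(nr, nc)]),
                )
    return None  # no route to the goal

# A toy map: 9 marks hazardous terrain the planner should route around.
terrain = [
    [1, 1, 9, 1],
    [1, 9, 9, 1],
    [1, 1, 1, 1],
]
route = plan_path(terrain, (0, 0), (0, 3))
```

On this toy map the cheapest route detours through the bottom row rather than crossing the high-cost cells, which is the same trade-off, in miniature, as steering a rover around risky terrain even when that path is longer.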
Alex describes simulation intelligence as a concept, a field, and, hopefully, a paradigm if all goes well over the next few years or decade. It is, in basic terms, the merger of artificial intelligence, simulation, and scientific computing in specific ways where they can really synergize. This is not, Alex states, at all like a simulation environment used to train reinforcement learning algorithms, as one might see at DeepMind or OpenAI. Simulation intelligence instead aims to let AI run experiments, test hypotheses, or simply accelerate existing and new models, using simulations as testbeds that represent real-world systems and processes. It is being applied at all scales, from particle physics and nuclear engineering to socioeconomics, biology, climate science, space weather, astrophysics, epidemiology, and everything in between.
James proceeds to ask about AI ethics – particularly whether the two guests consider AI more a moral or philosophical discussion, or perhaps an engineering problem. Alex is quick to point out that these framings are not mutually exclusive, and that the issue perhaps lies more in operations than anything else. In Alex’s experience, companies and start-ups need a clear plan and roadmap, one that includes checkpoints, an independent ethics review board, and an ethics council. He also argues that ethics broader than AI should be discussed, shedding light on dilemmas such as data privacy. All of these issues, Alex notes, come down to business ethics: are these companies ethical and moral in their decision-making?
Siddha responds to James’ question from an engineering perspective. She notes that ethics, at the end of the day, is just ‘ethics’; it isn’t a new concept that arrived with the emergence of AI – it has been there since humans first developed a consciousness. When looking at the development of technology, Siddha continues, there is broadly one stage called research and development, where one figures out what the technology is and how it might work, and then a second stage of deployment, where one figures out how one might want to USE the technology. So, with regard to responsibility and ethics, as an engineer and developer Siddha has to consider whether she is asking the right questions: for example, how was the dataset she is using collected? Is she thinking about the right biases, and about the right mitigation strategies to reduce their impact? Siddha reassures us that there is a huge collective consciousness within the AI community, one of the biggest examples of which is the AI Incident Database, a repository of failures documented from the real world by real engineers. It works as a coalition that tries to make sure humans are using AI responsibly.
Siddha suggests that people shouldn’t treat AI – or any technology, for that matter – as a saviour that has come to rescue us, and that one shouldn’t just wait around for innovations to happen. We have to own up to our responsibilities and come up with new solutions to the problems we face. New technology also takes a lot of time to develop, and more still to prove that it works and is effective in the real world. Every day humans encounter new failures, and every day they need to recalibrate their plans so as to be more agile and better suited to dealing with those failures. But, she notes, these rules apply to all new technologies.
Continuing on, James asks the guests whether they think AI is going to change us. Siddha notes that all technology changes lifestyles – that is exactly why technology is built. Remotes were developed for televisions, and cars were developed because humans didn’t want to get around by horse every day; AI, Siddha suggests, is similar: a technology developed to augment our lives. And, Siddha laughs, at the end of the day, who doesn’t love a Roomba (iRobot’s robot vacuum)? Siddha believes that AI is changing perspectives and making us more creative. An example is the Microsoft app ‘Seeing AI’, which is helping people in the blind community read the poems their children have written for them, and helping them function more easily in everyday society. Siddha comments that AI can nurture and support many different communities, and that these people could miss out if we focused solely on creating AI at scale rather than at the community level.
The next topic of conversation, in alignment with the interests and core values of SETI itself, is space, and what AI’s role is in space innovation. James goes so far as to mention the belief that what humans may one day meet in space will be some form of AI.
From one perspective, Alex states, any advanced civilisation with technology capable of travelling or communicating beyond its immediate neighbourhood is also intelligent enough to be strategic and thus not to do so, because anything able to communicate with them would probably also be able to wipe them out.
The discussion is followed by some intriguing questions from the SETI community:
Q: If someone is studying computer science, how can they get better at their trade and train quickly in the realm of this new technology?
A: (Alex) I recommend the textbook ‘Practical Deep Learning’ – for me it was very motivational and inspirational. I would say, though, that it’s good to take a holistic view of all the material on offer, keep exploring, and try to define certain challenges or problems so you can build up the confidence to approach them, all while focusing on building your own trajectory in the field.
(Siddha) I agree. What you want to do is work through examples to expand your knowledge. It isn’t really about theory; it’s all about questions like ‘how would I build my own search engine?’, and hypothesising on that question across different time scales and situations.
Q: There are speculations about the ultimate limitations of AI. Some people talk about an AI winter…is that justified?
A: (Siddha) Good question. We don’t know what we don’t know, so we just have to exhaust everything that comes along and explore our limitations as we meet them. Right now, a big challenge in AI is working out how to transfer information from one model to another – how to achieve interoperability between models. That’s a really interesting challenge that will help us get to the next level of AI.
(Alex) I’ll take the philosophical stance. When you look back through science and history, whenever the prominent people in a particular field say ‘oh, there’s nothing else to discover or invent’ – like some of the primary thinkers of alchemy – there is always something more. The reality of research in this field is that we study our own intelligence, that we have some perspective on the bounds of our own intelligence, and that the things we’re trying to build have larger bounds, or even no bounds – and that is why so many of us do this in the first place.
Q: Do you think a quantum computer might be able to communicate with higher intelligence, perhaps even aliens? Do you think it’s a useful tool for communication with ET?
A: (Alex) No (laughs). There’s no way of answering that objectively. It’s ‘not even wrong’ – and this is no criticism of the query or whoever sent it, as it will certainly become relevant eventually – but it’s the kind of suggestion you can’t even try to refute, because we just don’t have enough data points or perspective to argue the case for or against.
(Siddha) I think we need to fund SETI to build a quantum computer then figure out yes or no (laughs).
(James) That’s a very good answer Siddha (laughs).
Q: SETI is also very involved in planetary exploration, so here’s another question: How can you imagine AI supporting (SETI) in the future?
A: (Alex) I’ll give one practical answer: compression and data efficiency in satellites. Being able to process data on board would save enormous resources and money, because we’d need to transfer far less data and could thus forge ahead with other missions that were previously over budget.
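To make the savings Alex points to concrete, here is a toy sketch (not from the talk) showing how compressible a slowly varying sensor stream can be; the simulated telemetry and the use of `zlib` are illustrative stand-ins for real on-board data and a real flight codec.

```python
import zlib

# Simulated telemetry: a slowly varying, repeating signal,
# typical of many raw sensor streams.
raw = bytes(128 + int(20 * ((i % 100) / 100)) for i in range(10_000))

# Compress on "board" before transmitting; only the compressed
# bytes would need to cross the downlink.
compressed = zlib.compress(raw, level=9)
ratio = len(raw) / len(compressed)
```

Even this generic, lossless approach shrinks the payload dramatically for redundant data; mission-specific processing (for example, sending only detections rather than full images) can cut downlink needs by far more.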
Q: Could you see AI being used for mission planning and operations?
A: (Siddha) Yes. As part of the Frontier Development Lab we’ve had a series of projects broaching this subject. Say we put a rover on the moon and we want to look for some metal, or some other specific resource – how do we know where to go? An Earth-bound example: I’m sitting at home, I open Google Maps, and I look for a market where I can buy XYZ. But we can’t just sit around on the moon waiting for ages to find what we want. Could we produce a technology that tells the rover where to go, how to get there quickly, and what hurdles or difficulties it might face on the way? Over the past few years, FDL has developed a technique that produces high-resolution images from the existing satellite images of the moon and runs a route-planning algorithm on top of them to tell the rover which way to go. Things like this are developing, though we still have to test deployment. But this is the first step, and we are slowly getting there!
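The pipeline Siddha describes – enhance the imagery, then plan a route on top of it – can be caricatured in a few lines. The sketch below bears no relation to FDL’s actual models (which use learned super-resolution); it just bilinearly upsamples a coarse hazard map into a finer grid of the kind a route planner would consume.

```python
def upsample_bilinear(grid, factor):
    """Bilinearly interpolate a coarse 2D grid to factor-times resolution."""
    rows, cols = len(grid), len(grid[0])
    out = []
    for i in range(rows * factor):
        # Map each output pixel back to fractional input coordinates.
        y = i / factor
        y0 = min(int(y), rows - 1)
        y1 = min(y0 + 1, rows - 1)
        fy = y - y0
        row = []
        for j in range(cols * factor):
            x = j / factor
            x0 = min(int(x), cols - 1)
            x1 = min(x0 + 1, cols - 1)
            fx = x - x0
            top = grid[y0][x0] * (1 - fx) + grid[y0][x1] * fx
            bottom = grid[y1][x0] * (1 - fx) + grid[y1][x1] * fx
            row.append(top * (1 - fy) + bottom * fy)
        out.append(row)
    return out

# A 2x2 hazard map (0 = safe, 1 = hazardous) upsampled to 8x8,
# giving the planner finer-grained cells to route through.
coarse = [[0.0, 1.0],
          [1.0, 0.0]]
fine = upsample_bilinear(coarse, 4)
```

The real value of higher resolution here is that hazards occupy fewer cells of the planning grid, so a route planner can thread gaps that a coarse map would have marked as entirely blocked.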
Q: Diversity in AI: How does one use or embrace diversity for the benefit of AI technology, for example for customers?
A: (Siddha) The only way we can embrace diversity in technology and ensure the technology benefits marginalised communities is to ensure those communities are part of developing it. Google noted that around 10 percent of selfies were reversed, and then realised that those 10 percent came from left-handed people. If we have a more diverse team, we can develop more diverse solutions, because early on we will know what the failures are and can recalibrate our course. We’ve been doing that in FDL: we have an incredibly diverse team from several countries, and we have high-school students informing our innovations too.
(Alex) I want to echo the phenomenal leadership FDL shows in terms of diversity and inclusion; our teams are a whole magnitude more diverse than any other organization or team, which is super impressive. In my previous start-up we were developing algorithms for early prediction of Alzheimer’s and other neurodegenerative processes, and there was a significant challenge in overcoming dataset biases: in general, females have, for some reason, a higher likelihood of early-onset dementia and frontotemporal dementia, yet female representation in many clinical datasets was so sparse that it was really difficult to discover this, let alone use it for modelling insight. This carried on for decades. We need to be more diverse and inclusive not just in the people building and doing the research, but across the whole population, involving them in these technologies.
Q: Alex, you write for Forbes: What’s the headline in ten years’ time? The AI related headline you’d like on the front cover of Forbes?
A: (Alex) Okay…my mind goes to Nature or Science…well, I’d just like to see things like AlphaFold (an AI system developed by DeepMind that predicts a protein’s 3D structure from its amino acid sequence) on the cover. Forbes headlines, I think, will be riddled with much the same news and innovations around autonomous vehicles and autonomy, because I think there will be significant advances – along with the same challenges, and unforeseen new ones, that come up as autonomy reaches broader society.
(Siddha) Difficult question. I think if we could really develop AI applications to help detect or monitor signs of, let’s say, depression, that would be really impactful. There are always hidden cues in cases of depression which aren’t readily seen…picking up on those could have a huge impact, and would be really cool.
JAMES: That was a thrilling discussion. Thank you so much.
BILL: Thank you for this enlightening conversation. From my perspective, the potential for AI to do good and impactful things is as boundless as the human imagination. Like any innovation, it brings both good and bad – and the bad is the downside we have to manage.
We reached people from 30 different countries and over 20 different states today! Once again, this worldwide community showed us that our curiosity brings us together regardless of gender or race or identity or language.
==========================
The Carl Kruse Blog homepage is at https://carlkruse.org
Contact: carl AT carlkruse DOT com
Other articles for the Carl Kruse Blog by Hazel Anna Rogers include her summary of another SETI TALKS on UAPs and Electronic Music – A Not So Brief History.
Also find Carl Kruse at his SETI profile and the Carl Kruse Goodreads Profile.