AI
Misdiagnosis, costly drug-trial failures, preventable diseases born of bad habits, exorbitant record-management overheads and endless waiting: medical systems reliant on human administration alone are slow and error-prone, and the crushing volume and complexity of patient need keep them in constant crisis. Either the volume or the complexity has to go.
Should we attack the complexity of the world head-on, by boosting the processing power available to our species?
Machine Learning refers to algorithms that parse data, learn from it and use what they have learned to make intelligent decisions. New synthetic drugs, unseen health trends, neural signals, sophisticated dietary advice and more accurate diagnostics: all of these stand to be unearthed by deeper data parsing than any human mind is capable of, accelerating multiple fields of Longevity technology development.
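The parse-data, learn, decide loop described above can be sketched in miniature. The toy below is purely illustrative: it "learns" a single biomarker cut-off from synthetic, made-up patient readings, which is not how a real diagnostic model works, but it shows the shape of the idea.

```python
# Illustrative sketch only: a toy learner on synthetic data, showing the
# parse -> learn -> decide loop. All values and names are hypothetical.

def learn_threshold(records):
    """Find the single cut-off value that best separates the two labels.

    records: list of (biomarker_value, is_diseased) pairs.
    """
    best_t, best_acc = None, -1.0
    for t in sorted(v for v, _ in records):
        # Predict "diseased" when the reading is at or above the cut-off.
        correct = sum((v >= t) == label for v, label in records)
        acc = correct / len(records)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def predict(threshold, value):
    return value >= threshold

# Synthetic "patient data": low readings healthy, high readings diseased.
data = [(0.9, False), (1.1, False), (1.3, False),
        (2.4, True), (2.7, True), (3.1, True)]
t = learn_threshold(data)
print(predict(t, 0.8), predict(t, 2.9))  # prints: False True
```

The point is not the algorithm, which is deliberately trivial, but the pattern: given more data, the learned decision rule improves without anyone hand-writing it, and at scale that rule can range over patterns no clinician could hold in mind at once.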
Yet every successful ascent is a conquering of gravity and drag, and we must consider the field’s main source of friction: privacy. If Machine Learning is a rocket, huge amounts of data are the fuel, and the extraction and monopolization of this private data is a principal focus of the AI healthcare industry. The uproar surrounding Facebook, a leader in the wider data-collecting industry, makes the risk plain: the industry has already trespassed into areas so foundational to modern conceptions of liberty that governments and civic resistance could limit its development altogether.
Personal genomics company 23andMe, for example, has faced challenges since reports came to light of a 2018 deal granting the pharma giant GSK access to its database of 5 million people. In a similar vein, the Canada Border Services Agency was found to be collecting the DNA of migrants considered for deportation via ancestry websites, sparking outrage. Revolutionary tools abound, yet the principles of their ethical implementation are far from established, clouding the investment landscape even as it offers first-mover advantages.