
A BRIEF CONVERSATION ON ARTIFICIAL INTELLIGENCE
Hello Readers,
This incredible scientist, Rosalind Franklin, is not exactly a household name. Why not? One medical historian describes her as the victim of “one of the most egregious ripoffs in the history of science.”

It all started in January 1951 when Franklin was using X-ray diffraction techniques to photograph DNA fibers. One of her X-ray pictures of DNA, known now as Photograph 51, became famous as critical evidence in identifying the structure of DNA. The image was achieved through 100 hours of X-ray exposure from a machine Franklin herself had refined.
Significantly, Franklin had a personality conflict with colleague Maurice Wilkins, one that would end up costing her greatly. In January 1953, Wilkins changed the course of DNA and scientific history by disclosing without Franklin’s permission or knowledge her Photo 51 to competing scientist James Watson, who was working on his own DNA model with Francis Crick at Cambridge.
Upon seeing the photograph, Watson said, “My jaw fell open and my pulse began to race,” according to author Brenda Maddox, who in 2002 wrote a book about Franklin titled Rosalind Franklin: The Dark Lady of DNA.
The two scientists used what they saw in Photo 51 as the basis for their famous model of DNA, which they completed on March 7, 1953, and for which they received a Nobel Prize in 1962. Crick and Watson took most of the credit for the finding. When publishing their model in Nature magazine in April 1953, they included only a footnote acknowledging that they were “stimulated by a general knowledge” of Franklin’s and Wilkins’ unpublished contributions, when in fact much of their work was rooted in Franklin’s photograph and findings. Wilkins’ and Franklin’s separate articles appeared second and third in the same issue of Nature, making them seem merely to support Crick and Watson’s.
Franklin was not one to make waves or complain; contemporaries say that reserve was bred into her from infancy by her parents. Even so, she was known to have a sharp tongue at times and could nurse both grudges and friendships indefinitely. She might have fought this outright theft with greater intensity had cancer not interfered. She first noticed symptoms that something was amiss in 1956, and her doctor found two large tumors in her abdomen. She died in April 1958.
John Desmond Bernal, a pioneer in X-ray crystallography, spoke highly of Franklin around the time of her death in 1958. “As a scientist Miss Franklin was distinguished by extreme clarity and perfection in everything she undertook,” he said. “Her photographs were among the most beautiful X-ray photographs of any substance ever taken. Their excellence was the fruit of extreme care in preparation and mounting of the specimens as well as in the taking of the photographs.”
Rosalind Franklin was just 37 when she died.

My last two monthly columns discussed the “why” and the “value proposition” for purchasing and using technology. In this column, let’s discuss how artificial intelligence (AI) may become the artificial neural network (brain) behind machine learning and output. By definition, AI is generally known as the ability of a computer to think, learn, and mimic human behavior.
I enjoy reading articles forecasting how technology (especially medical technology) will change our lives. According to researchers, the current uses of medical AI relate only to diagnostics and prognosis prediction. Global medical databases are linked to form a digital mind, which could be described as an artificial neural network. AI-based technology is connected to these artificial neural networks to produce verbal and written language output as compelling and convincing responses to inquiries and tasks. If digital minds become a new form of cognitive intelligence, could they become the problem solvers for medical personnel or augment their decision-making? When you call 911, will a social AI language generator respond to the caller? If these applications are worrisome, imagine the potential for AI to replace human decision-making not only for medical machinery but also for autonomous vehicles, commercial and military drones, drug development, and medical personnel.
There are two social language AI applications currently available in the marketplace: ChatGPT and Bing Chat. These chatbots are used for automated response systems and content generation. Data is being captured, stored, and programmed to augment our interactions with technology. You might wonder who is overseeing these advancements; that is certainly a valid concern. If AI is creating content, can I trust the information I read in journals or see in videos? Could AI become an instrument for disseminating misinformation, alternative thinking, and personal tracking, or will its information add value?
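To make the idea of a chatbot “responding to a query” a little more concrete, the sketch below shows how an application might package a user’s outline into the kind of message payload chat-style AI services typically expect. This is a generic illustration, not any vendor’s actual API: the model name and message structure here are hypothetical placeholders.

```python
def build_chat_payload(outline, model="example-chat-model"):
    """Assemble a chat-style request payload from a user's outline.

    The "model" value and the system/user message format are
    placeholders modeled on common chat-completion conventions,
    not a specific product's interface.
    """
    prompt = (
        "Write a complete essay that fills in this outline:\n"
        + "\n".join(f"- {point}" for point in outline)
    )
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful writing assistant."},
            {"role": "user", "content": prompt},
        ],
    }

# Example: a student's three-point outline becomes a single request
# that a chatbot service would answer with generated prose.
payload = build_chat_payload([
    "Introduction: what AI chatbots are",
    "Body: how they generate content",
    "Conclusion: why their output must be verified",
])
```

The point of the sketch is how little the human contributes: a few bullet points go in, and the service returns finished, “compelling and convincing” prose.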
According to Microsoft, “Bing Chat is a full-fledged AI chatbot integrated with Microsoft’s search engine. Bing Chat offers an experience that allows you to have a more natural conversation with the chatbot to get answers to long and complex questions. Bing AI search can answer your questions and refine your search by asking follow-up questions or giving feedback. Bing AI search draws on the deep knowledge base behind Bing search, so you can trust that you will get reliable and up-to-date results.”
Well, this all sounds interesting, so I asked Bing Chat if I could rely on ChatGPT and Bing Chat responses to questions as being unbiased and trustworthy, and I was surprised by the answer:
No, you cannot rely on the Chat GPT and Bing Chat responses as unbiased and trustworthy information. Both chatbots are based on artificial intelligence models that learn from large amounts of text data, which may contain errors, biases, misinformation, or outdated information. Moreover, both chatbots may generate texts that are not factual, logical, or coherent, depending on the input and the context. Therefore, you should always verify the information provided by the chatbots with other reliable sources and use your own critical thinking skills to evaluate their responses. ChatGPT and Bing Chat are meant to be fun and educational tools, not authoritative sources of information.
Maybe trust concerns like those mentioned above prompted the Future of Life Institute to issue a letter asking for a voluntary pause in artificial intelligence development. Roger McNamee (Co-Founder and Managing Director of Elevation Partners and CNBC Contributor) is advocating for legislation to form an oversight body similar to the FDA. President Biden recently stated that it remains to be seen whether artificial intelligence is dangerous, but that he believes technology companies must ensure their products are safe before releasing them to the public. This raises the question, “What does safe mean?” Do we, as consumers, trust this technology enough to provide personal medical or other information that could be used in harmful ways? Even Elon Musk is calling for a pause in AI development.
As an educator, I like to stay current with the tools available to students that enable them to supplement learning with technology-generated assistance. Programs like Grammarly or Editor are teaching aids that help students with writing skills. However, new AI generative language programs such as ChatGPT and Bing Chat enable students and others to generate content, including complete essays, video scripts, and other material, just by running a query. A student or blogger can upload an outline and ask the chatbot to fill in the content, and the chatbot will generate “compelling and convincing” material in response. Supporters of generative language suggest this kind of content generation assists writers by giving them polished videos and manuscripts to aid in their presentations and writing. However, I believe this type of con-