Points of Consideration Before Implementing ChatGPT

Photo by Sanket Mishra: ChatGPT Considerations.

ChatGPT can readily traverse the vast amounts of information on the internet to answer almost any ad-hoc question users pose. That it does so via natural language, in close to real-time, is indicative of the immense advancements of Generative Artificial Intelligence—and of Natural Language Generation, in particular.

ChatGPT’s practical utility spans most tasks associated with language, from creating annotated training datasets for data scientists to generating highly specific reports, emails, or papers for almost any facet of business or academia. Not surprisingly, vendors of all types are rushing to implement this language model to improve solutions for everything from Business Intelligence to content services.

Nonetheless, it’s paramount that shrewd organizations temper their adoption of ChatGPT, and not only because of its questionable accuracy rates for question answering.

“It’s still got its own set of limitations,” admitted Abhishek Gupta, Principal Data Scientist, and Engineer at Talentica. “It just blindly follows what a human is asking it to do, and a human can be wrong. It lacks, to a certain extent, conversational or logical reality.”

From a more pragmatic standpoint, however, it’s quite possible that the language model powering ChatGPT is even more utilitarian than ChatGPT itself. There are also certain logistical factors pertaining to data privacy, regulatory compliance, and data security that must be addressed before it’s advisable to broadly implement ChatGPT for enterprise use cases.

Is GPT Better?

A crucial caveat for users hastening to implement ChatGPT into organizational processes is the distinct possibility that the language model it relies on is even more useful to the enterprise. “The underlying big language model is GPT-3.5,” Gupta specified. “That’s what ChatGPT uses. Over that, there is a huge prompt dataset that is created and fine-tuned on that. But, the whole power lies in the underlying language model, which is GPT-3.5.” Whereas ChatGPT is primarily lauded for its question-answering and generative capabilities, GPT-3.5 typifies the multitask learning phenomenon in which a single language model can be readily adapted to handle almost any language task.

Data Science Implications

According to Gupta, “GPT-3.5 is a large language model that understands the structure of the language, grammar, and the meaning of sentences. With it, we can generate text and, in some use cases, generate code. We can provide commands or a set of examples to achieve a task in NLP.” By relying on this model that’s foundational to the acclaim of ChatGPT, organizations can tailor their own data science initiatives. GPT-3.5’s propensity for few-shot learning and prompt engineering (which all but replaces feature engineering by enabling users to simply ask the model, the right way, to perform a task) extends the usefulness of data science to the enterprise.

It also transitions this discipline from a specialized one for rarified technical users to one accessible to non-technical users. There is an assortment of applications to which organizations can apply this language model. For example, “If you want to categorize websites into different verticals, you can just give the content of the site and ask the system what’s this site about, what category does it belong to,” Gupta remarked.
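The categorization approach Gupta describes can be sketched as a few-shot prompt: a handful of labeled examples followed by the unlabeled site content. A minimal sketch follows; the categories, example snippets, and the `build_prompt` helper are illustrative assumptions, not part of any OpenAI SDK.

```python
# Illustrative few-shot examples for website categorization.
# These categories and snippets are assumptions for demonstration only.
FEW_SHOT_EXAMPLES = [
    ("Breaking news on elections, the economy, and world affairs.", "News"),
    ("Shop laptops, phones, and accessories with free shipping.", "E-commerce"),
    ("Peer-reviewed research articles on machine learning.", "Academic"),
]

def build_prompt(site_content: str) -> str:
    """Assemble a few-shot classification prompt: labeled examples first,
    then the unlabeled site content for the model to categorize."""
    lines = ["Classify each website into a category."]
    for content, category in FEW_SHOT_EXAMPLES:
        lines.append(f"Website: {content}\nCategory: {category}")
    # The trailing "Category:" cues the model to complete the label.
    lines.append(f"Website: {site_content}\nCategory:")
    return "\n\n".join(lines)

prompt = build_prompt("Live scores, match highlights, and player stats.")
# The assembled prompt would then be sent to the model, e.g. via
# OpenAI's chat API (sketch, not executed here):
# response = client.chat.completions.create(
#     model="gpt-3.5-turbo",
#     messages=[{"role": "user", "content": prompt}],
# )
```

Note that no model retraining is involved: swapping the examples and instruction is all it takes to repurpose the same underlying model for a different task, which is the sense in which prompt engineering displaces feature engineering.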

Data Privacy and Security

In addition to the vast possibilities that ChatGPT’s underlying language model can provide, another pragmatic concern about employing this option pertains to data protection and regulatory compliance. ChatGPT is accessed through APIs made available by the company responsible for launching it, OpenAI. According to Gupta, there are several organizations for whom it’s not feasible to “share their data direct with APIs exposed by OpenAI.”

However, organizations can obfuscate their data before relying on APIs to access ChatGPT. The proper implementation of techniques such as masking allows users to “analyze patterns from the documents while hiding a person’s name, or important dates, and information that’s not supposed to be exposed,” Gupta explained. Machine learning approaches can facilitate these advantages at scale so that enterprises can preserve the privacy of sensitive information in datasets or documents. That way, organizations aren’t “sharing personal data or data that’s not supposed to be shared, and convert it to a hidden form, and then use [OpenAI’s] API,” Gupta indicated.
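A minimal sketch of this masking step, assuming simple regex patterns for emails, dates, and phone numbers; real deployments would add named-entity recognition models to catch personal names and other identifiers that fixed patterns miss.

```python
import re

# Placeholder tokens mapped to the patterns they redact.
# These patterns are a simplified assumption; production masking
# typically combines regexes with NER-based PII detection.
PATTERNS = {
    "[EMAIL]": re.compile(r"\b[\w.+-]+@[\w-]+\.\w+\b"),
    "[DATE]": re.compile(r"\b\d{1,2}/\d{1,2}/\d{2,4}\b"),
    "[PHONE]": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def mask(text: str) -> str:
    """Replace sensitive substrings with placeholder tokens so the
    document can still be analyzed for patterns without exposing
    personal information to an external API."""
    for token, pattern in PATTERNS.items():
        text = pattern.sub(token, text)
    return text

masked = mask("Contact jane.doe@example.com by 04/15/2023 or call 555-867-5309.")
# masked -> "Contact [EMAIL] by [DATE] or call [PHONE]."
```

Only the masked text would then be sent through OpenAI's API; the mapping from tokens back to the original values, if needed, stays inside the organization.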

Taking Precautions

ChatGPT has far-reaching consequences for everything from basic text generation to answering questions in natural language in close to real time. However, it’s critical for organizations to realize that its underlying language models, GPT-3.5 and GPT-4, may power an even broader range of enterprise use cases.

It’s also imperative to safeguard sensitive information in the datasets organizations utilize with this approach before making it available through APIs. Those who heed these precautions may very well reap the full benefits of this expression of deep learning.
