It is the age of generative AI. One of the hottest topics is, by far, the GPT series. It provides something like general AI (of course, it is still very far from the original concept of general AI). People can ask questions and it answers, probably drawing on the huge amount of text used to train its transformers.

One of the features I find useful is its ability to rephrase/rewrite/improve one’s writing. One doesn’t even have to finish one’s writing, because the mechanism of an LLM catches only tokens and extracts semantic meaning out of them. Getting semantic meaning out of token vectors doesn’t require a line to carry double or triple layers of intention behind it. So, when text is purely a means to exchange opinions, GPT is truly an efficient tool.

If anyone spends time polishing text by hand, it is not an economically sound use of effort. However, if relying on GPT accelerates the deterioration of our brain’s power to think, then it becomes a different story. In that case, for my future kids, it would not be wise to encourage them to use GPT to save time. So, in my opinion, we should ponder the right cases for using GPT in our lives.

If one needs to write meaningless letters or applications, then using the power of GPT seems okay. You don’t have a lot to learn from writing a contract or a recommendation letter for an internship position. However, as long as the content of the writing needs to contain your own thought, it’s not appropriate.

So, here’s the question: “Can we distinguish text written by GPT?” One can say we need to train an AI classification algorithm to tell whether a given text comes from GPT or not. Then, I argue that one can develop a GPT model which deceives that classification algorithm. It’s essentially the Actor-Critic problem from reinforcement learning. It’s a matter of the dataset you can secure, but given that GPT training is expensive and cumbersome, if developers of the same level are working on both sides, then the classifier is likely to win.

Here’s the question I want to cast.

==== Between the text above and the text below, which do you think was generated by GPT? And what is the reason for your pick? ====

We are currently in the era of generative AI, and one of the prominent topics of discussion is the GPT series. While it does offer a form of general AI, it is still far from achieving the original concept of true general AI. Users can ask questions, and GPT attempts to provide answers based on the vast amount of text it has been trained on using transformers.

One useful feature that I find valuable is GPT’s ability to rephrase, rewrite, and enhance one’s writing. It saves users from having to fully articulate their thoughts since the language model comprehends the tokens and derives their semantic meaning. This eliminates the need for convoluted and complicated intentions behind each line. In essence, GPT serves as an efficient tool for exchanging opinions through text.

Economically speaking, it may not be sensible to spend time refining the text oneself. However, if using GPT diminishes our cognitive abilities and hampers our thinking processes, then it presents a different scenario. In that case, it would be unwise to encourage future generations, such as my own children, to rely on GPT to save time. Thus, in my opinion, we should carefully consider when and where to use GPT in our lives.

When it comes to writing meaningless letters or applications, leveraging the power of GPT seems acceptable. There isn’t much to learn when composing a simple contract or recommendation letter for an internship position. However, when the content of the writing requires the expression of personal thoughts and ideas, relying on GPT becomes inappropriate.

This brings us to the question: “Can we differentiate between text generated by GPT and human-authored text?” Some might argue that we need to train an AI classification algorithm to discern whether a given text originates from GPT or not. Nevertheless, I contend that it is possible to develop a GPT model that can deceive such a classification algorithm. This raises the issue of the Actor-Critic problem in reinforcement learning. While the outcome depends on the dataset available, considering the cost and complexity of GPT training, if both the classifier and the GPT model are developed by the same skilled developers, the classifier is likely to prevail.
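To make the Actor-Critic framing concrete, here is a minimal toy sketch. It is entirely my own construction, not any real detector or GPT model: I assume documents are bag-of-words counts over a tiny vocabulary, the “generator” (actor) is a unigram word distribution, the “detector” (critic) is a logistic-regression classifier, and a crude score-based nudge stands in for reinforcement-learning fine-tuning. All names (`human_p`, `train_detector`, and so on) are hypothetical.

```python
# Toy arms race between a text detector (critic) and a generator (actor).
# Assumptions, not from the post: documents are bag-of-words counts over a
# tiny vocabulary, the generator is a unigram distribution, and the detector
# is logistic regression retrained from scratch each round.
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DOC_LEN = 20, 50  # toy vocabulary size and words per "document"

human_p = rng.dirichlet(np.ones(VOCAB))  # fixed "human" word distribution
gen_logits = rng.normal(size=VOCAB)      # generator starts off different

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sample_docs(p, n):
    # n bag-of-words count vectors drawn from unigram distribution p
    return rng.multinomial(DOC_LEN, p, size=n).astype(float)

def train_detector(X, y, steps=200, lr=0.1):
    # Logistic regression by gradient descent; label 1 means "generated".
    w = np.zeros(VOCAB)
    for _ in range(steps):
        pred = 1.0 / (1.0 + np.exp(-np.clip(X @ w, -30, 30)))
        w -= lr * X.T @ (pred - y) / len(y)
    return w

for rnd in range(5):
    # Critic step: refit the detector on fresh human vs. generated samples.
    X = np.vstack([sample_docs(human_p, 200),
                   sample_docs(softmax(gen_logits), 200)])
    y = np.concatenate([np.zeros(200), np.ones(200)])
    w = train_detector(X, y)
    acc = (((X @ w) > 0) == y).mean()

    # Actor step: push the generator away from words the detector flags as
    # "generated" (a crude stand-in for an RL policy-gradient update).
    gen_logits -= 0.5 * w
    print(f"round {rnd}: detector accuracy = {acc:.2f}")
```

In this toy, the detector can be cheaply retrained every round, while the generator only inches toward the human distribution. That mirrors the argument above: with developers of comparable skill on both sides, and the generator being the far more expensive model to update, the classifier tends to keep the upper hand.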
