My opinion on ChatGPT (translated with ChatGPT)

Summary:
You've undoubtedly noticed, but I'm using ChatGPT more and more. So, I thought it would be interesting to give my opinion, as an average user, on generative AIs.
Preamble
For convenience, in this article I will use the term AI. However, I want to emphasize that this term is a misnomer and refers more to a marketing reality than to a technical one. In fact, here is the only definition of AI that, for me, stands the test of time: a type of algorithm so new that no one, not even its designers, really knows how it works and what it can (and cannot) do.
Indeed, this is not the first AI 'revolution' we have experienced. Every 10 years or so, a breakthrough in algorithmics lets us do things we thought were impossible. Marketers, startup founders and other swindlers working for the well-being of humanity then scream at anyone willing to hand them a microphone that the singularity is at our doorstep and that, in exchange for a tiny billion euros of investment in their Ponzi pyramid, er, disruptive company, we will see the advent of paradise on Earth, wealth, and the return of lost loved ones.
Then time passes, and these new algorithms are used to create real products, which are sold to real customers and generate real value (and real nuisances).
Yes, in theory, we could use these technological advances to reduce nuisances and pollution without having to reduce production, but in practice, we never do.
It's one of the reasons why believing that technology alone will solve our ecological problems is madness. Even if it had the capacity (which it clearly does not), we would still need to decide to use it for that purpose, or even just to pay engineers and researchers to do research in that direction.
But, in short, once a technology is widely understood and used, we stop talking about AI and give these new programs a name that better reflects their true nature. Nothing indicates that things will be different this time. However, I don't know what name will eventually be chosen, and I am very bad at naming things myself. So, I will settle for using this imprecise term.
The usefulness of AI
The present
Let's start by assessing what AI can currently do.
For coding
For coding, ChatGPT really helps, provided you treat it as a human with faults and qualities, not as an omniscient god. And not just any human. ChatGPT is a human with an encyclopedic memory, but somewhat obstinate, incapable of finding innovative solutions, and a yes-man. When it suggests something and you tell it that it doesn't work, it proposes a way to resolve the issue, but if that way doesn't work either, it will simply suggest the same unsuccessful solution over and over again.
It's up to you, the human, to imagine an alternative way of solving the problem. ChatGPT is your 50-year-old colleague who possesses incredible knowledge but has a big ego and is incapable of adapting to new ways of doing things. Above all, he is incapable of considering that he could be wrong, unless the problem is simplified so much that only that solution remains to be considered (and even then).
In a word, he's a teacher (but an excellent one). Working with him is very useful provided you know how to approach him. If you expect him to do the work for you, he will waste your time. However, he can teach you how to do it better than any online tutorial.
And of course, it can perform the most tedious coding tasks on your behalf. However, beginners should avoid relying on this too much at first, because repeating these tedious tasks is how you acquire the basic knowledge and reflexes that prove extremely useful later.
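To give an idea of what delegating such a tedious task looks like in practice, here is a minimal sketch using the official OpenAI Python client; the model name, the prompt and the example task are purely illustrative, and you need your own API key in the OPENAI_API_KEY environment variable.

```python
# Minimal sketch: ask the model to draft a boilerplate helper, then review it yourself.
# Assumes the official "openai" Python package (v1.x) and an OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name; use whatever you have access to
    messages=[
        {"role": "system", "content": "You are a senior Python developer. Return only code."},
        {"role": "user", "content": "Write a function that parses a CSV of invoices "
                                    "and returns the total amount per client."},
    ],
)

generated_code = response.choices[0].message.content
print(generated_code)  # read it critically before pasting it anywhere
```

The last comment is the important part: treat the output as a draft from your obstinate colleague, not as something to merge blindly.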
For translation
For translation, it's just great, and unless your name is Disney and you want translations with a high level of artistic quality, I no longer see the point of hiring professional translators. At least between English and French. For lesser-known or very different languages like Japanese, I don't know.
For writing
For spell checking, specialized tools like LanguageTool are cheaper and do better work. Not because ChatGPT is bad at spelling, but because it has a very strong tendency to change the meaning of the text (even when explicitly asked to just correct the spelling and not change anything else). And also because there is currently no way to use it natively in Word or LibreOffice, which means it cannot be used without destroying the text's formatting.
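For what it's worth, LanguageTool also exposes a public HTTP API, which makes it easy to script a spelling and grammar pass without touching a document's formatting. Here is a minimal sketch, assuming the `requests` package; the sample sentence is made up.

```python
# Minimal sketch: send raw text to LanguageTool's public v2 API and list the issues found.
import requests

response = requests.post(
    "https://api.languagetool.org/v2/check",
    data={"text": "This sentense contains a speling mistake.", "language": "en-US"},
    timeout=10,
)
response.raise_for_status()

for match in response.json()["matches"]:
    start, length = match["offset"], match["length"]
    suggestions = [r["value"] for r in match["replacements"][:3]]
    print(f"{start}-{start + length}: {match['message']} -> {suggestions}")
```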
To get a review of a text and writing advice, results are very variable, but in general, it's crap. Basically, he absolutely wants you to modify your text to adopt that journalistic style I find particularly detestable. You know the style: lifeless, trying to make you believe it has no personal opinion, that it is neutral and objective, trying not to take any side and, above all, not to offend anyone.
Personally, I find that this style is the literary equivalent of an industrial plain yogurt. It's not bad in itself, but it doesn't really have any taste. Of course, you will tell me that you can ask the chatbots to change personality, but I haven't found a way to get him to embody a proofreader role that suits me (but maybe I'm the one who is incompetent).
For political opinion pieces, it's annoying, but for novels, it's a deal breaker.
For instance, when I asked him for an opinion on the first chapter of my Harry Dursley fanfic, he was traumatized that my 6-year-old Harry called Dudley a 'fat stinky'. He advised me to change the dialogue of young 6-year-old Harry so that it would appear less immature and hurtful, even though having him speak like a young child was obviously intentional on my part.
So much for the form, but the most problematic thing is that he refuses to comment on the substance. Or else he will make a critique based only on what I've told him myself.
However, sometimes he has ideas for rewording or restructuring that greatly improve clarity. But this is rare. On the other hand, I find him very useful for source research; it saves time on that activity. However, unless you say something completely foolish, don't expect him to tell you that what you're saying is false or illogical. For ChatGPT, the customer is always right. So make sure you check the sources he gives you, in case they lead to a non-existent site or say the opposite of what ChatGPT claims (and therefore your article contains false information you had missed and need to delete).
For image generation
Technically
For the generation of images or videos, I think graphic designers and other audiovisual professionals have nothing to worry about for many years to come.
It generates good images, but it's impossible to get it to modify the specific details you want.
It's a tool that allows people with no background to create beautiful things, and a help for finding inspiration, but it's impossible to obtain an image that meets the level of detail required in advertising or video games (though, since I don't work in those fields, I might be mistaken).
For me, in the audiovisual field, AI does not disrupt an existing market but creates a new one. A bit like photography, which did not make painting disappear, but allowed ordinary people to have pictures of themselves and thus created a parallel market meeting needs other than those of painting (even though I am aware that the analogy is imperfect, since the invention of photography greatly changed the way we paint).
Little tip: don't ask it to generate images right away; first ask it for texts precisely describing the image you want. Image generation takes a lot of time, and once it's done, it will refuse to change its approach. To get a good result, it's therefore crucial to first discuss with ChatGPT what you want.
By the way, don't hesitate to specify the style (painting, pastel, Franco-Belgian comic style, ...).
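If you script this, the same two-step logic applies: iterate on the textual description first (cheap and fast), and only then spend time on the actual generation. A minimal sketch with the OpenAI Python client; the model names, the subject and the style are purely illustrative.

```python
# Minimal sketch of the "describe first, generate last" tip.
# Assumes the official "openai" Python package (v1.x) and an OPENAI_API_KEY variable.
from openai import OpenAI

client = OpenAI()

# Step 1: cheap and fast; refine this description as many times as needed.
brief = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model name
    messages=[{
        "role": "user",
        "content": "Describe precisely an image of a rainy Paris street in a "
                   "Franco-Belgian comic style: framing, lighting, colours, details.",
    }],
)
image_prompt = brief.choices[0].message.content

# Step 2: slow and hard to correct afterwards; run it only once the description is right.
image = client.images.generate(model="dall-e-3", prompt=image_prompt, size="1024x1024")
print(image.data[0].url)
```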
Politically
But when we talk about image generation, the most common objections are not about the AI's ability to create beautiful images, but about ethical issues. Particularly about respect for copyright.
As far as I'm concerned, I am against intellectual property rights, which is why all the (mediocre) content I publish is under a Creative Commons license. But explaining this position would require an entire article. So I will just redirect you to this video: Intellectual property #Mindset 14, and sum up my position by saying that it would be more appropriate to rename copyright as producers' rights, since most of the time copyright is used to take money from authors rather than give it to them (this is, incidentally, very visible on YouTube).
For me, if we are really concerned about the fate of artists, we need to fight copyright, not defend it. If we want to protect artists from the very real upheavals AI will cause, what's needed is not a law protecting copyright, but a law protecting artists.
And this is a fight that did not start with AI. Indeed, producers did not wait for AI to underpay (or not pay at all) artists, or to impose insane working conditions on them. For me, the point is not to focus on whether or not to use AI, but to support artists' unions. If you want to help them, keep yourself informed about social movements through outlets like Rapports de force, or their equivalents covering the video game industry, and support them (by giving them money or by following their calls for boycott, if any).
But that doesn't change the fact that, on a personal level, as long as there is no law protecting authors from the damage caused by AI, I plan to boycott any company that uses it (I'm not sure I can keep this resolution, but I will try).
For sharing knowledge and creating
If, from an ecological perspective, the multiplication of creations by AI is negative, from a human perspective it's wonderful. It gives everyone the opportunity to create and learn. For me, AI will enable the same phenomenon as the development of the internet or printing did in their time: an unprecedented democratization of access to knowledge and of the ability to create, and thus to express oneself.
Of course, as with the web or printing, it will not be all positive, but I think that, overall, the balance will be positive.
Special case: autonomous vehicles
It's a bit off-topic, but I would like to talk about the case of autonomous vehicles and the criticism made against them: that they don't work because they cause fatal accidents. I may shock you, but this is also the case with humans. So the question is not whether AI causes accidents, but whether it causes more than humans do.
Without large-scale tests with public data, it is impossible to determine who is the worse driver, AI or humans. Both kill children, but which one kills more remains a mystery.
At first glance, AI and humans currently seem to kill children at a very similar rate; the quantitative difference appears to be quite small.
But this observation hides a significant qualitative difference:
- AI kills in situations that humans would have easily avoided.
- Conversely, humans kill in situations that AI would have easily avoided.
Indeed, among humans, the main causes of accidents are:
- Alcohol,
- Speeding,
- Non-compliance with safety distances.
With AI, in principle, it's rather errors like mistaking a child for a fire hydrant, and therefore not anticipating that the child will cross the road.
Therefore, to improve safety, we need to get rid of cars, er, to combine the two.
For example: by default, the human drives, but if the human is intoxicated, the AI automatically takes over. Moreover, the AI could permanently enforce safety distances and speed limits.
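To make the idea concrete, here is a tiny sketch of such a hybrid controller. Every name, threshold and unit is hypothetical and chosen purely for illustration; it is not taken from any real system.

```python
# Hypothetical sketch of the hybrid model: the human drives by default, the AI takes
# over if the driver is impaired, and it always enforces speed and distance limits.
from dataclasses import dataclass

@dataclass
class DriverState:
    blood_alcohol: float    # g/L
    speed: float            # km/h
    gap_to_next_car: float  # seconds of following distance

LEGAL_ALCOHOL_LIMIT = 0.5   # g/L, depends on jurisdiction
SPEED_LIMIT = 80.0          # km/h, would come from map data in a real system
MIN_SAFE_GAP = 2.0          # seconds

def control_mode(state: DriverState) -> str:
    """Decide who drives and which hard limits the AI enforces."""
    if state.blood_alcohol > LEGAL_ALCOHOL_LIMIT:
        return "AI drives: driver impaired"
    corrections = []
    if state.speed > SPEED_LIMIT:
        corrections.append("cap speed at the limit")
    if state.gap_to_next_car < MIN_SAFE_GAP:
        corrections.append("brake to restore the safety distance")
    if corrections:
        return "human drives, AI enforces: " + ", ".join(corrections)
    return "human drives"

print(control_mode(DriverState(blood_alcohol=0.8, speed=70, gap_to_next_car=2.5)))
print(control_mode(DriverState(blood_alcohol=0.0, speed=95, gap_to_next_car=1.2)))
```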
Special case: grading papers and assessment
A friend who works at a vocational training center told me that they recently fired the people responsible, among other things, for grading student papers, and replaced them with AI.
Personally, I find this stupid. Besides the financial savings, the main argument put forward is that AI would be more objective than a human grader, since it has no biases related to fatigue (on the last paper, for example) or emotion (if the grader knows the student). And that, in any case, a human continues to check the AI's work.
But this argument seems flimsy to me. Let's be honest: a human is never going to bother double-checking the machine. Of course, at the beginning they will, but very quickly they will settle for saying that everything is fine without re-reading anything.
Then, who controls the grading criteria? And how do we make them evolve? If we find out that the AI consistently gives lower grades to papers signed 'Mohammed' than to those signed 'Jean-Eudes', or that it grades on absurd criteria (such as the number of citations or which author is cited), how would we know, and how would we correct the problem? If what we expect from apprentices changes, how do we make the AI understand it? For me, replacing human graders with a black box is madness.
Granted, this remains a technical problem that can be corrected (by regularly giving a lot of money to the company that develops the AI), but the real problem lies elsewhere: why on earth do we want to automate the grading of papers?
If the goal is just to rank students, then let's drop the essays and use multiple-choice questions or logic tests, which could already be corrected automatically with 20-year-old technology. This will save everyone a lot of time and money.
If the goal is to see where the student stands in order to help them, what is the point of automating the grading? If the grading is automated, how will the teacher know what the student's difficulties are and how to help them?
And if the goal is just to make the student practice, why grade the paper?
In reality, even if the automation were technically well done, it would still be nonsensical. It simply proves that we hold exams mechanically, without asking why we hold them. It shows that education has largely become an absurd administrative machine, which forces people to perform repetitive tasks without any real purpose. We are far from a space whose mission would be to educate and empower (if it ever was one).
Special case: ecology
When we talk about AI, it is inevitable to address the issue of its ecological cost.
Indeed, AI has an immense ecological impact. But so do all our activities. For me, the question is not whether AI, or any other activity, has an impact, but whether we can justify this impact or reduce it.
Generating an image with AI requires as much electricity as charging a smartphone, and as much water as flushing a toilet.
One could say that this is roughly what an artist creating an image with a graphics tablet or a powerful computer would consume (and this is how most artists work). And so, unless we return to paper and pencil, AI does not consume more (or not much more) than the current means of producing an image.
But that would overlook the fact that AI leads to a multiplication of image production. Producing images, once reserved for an 'elite' (with large quotation marks), suddenly becomes accessible to everyone. As a result, image production explodes, and with it the damage this activity causes to the environment.
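A back-of-envelope calculation makes this multiplication effect concrete. The per-image figure below simply reuses the smartphone-charge comparison above; the battery size and the daily image volumes are assumptions I made up purely for illustration.

```python
# Rough sketch: the per-image cost stays the same, the total cost explodes with volume.
KWH_PER_IMAGE = 0.015  # assumed ~15 Wh, i.e. roughly one smartphone charge

scenarios = {
    "professionals only (hypothetical volume)": 1_000_000,    # images per day
    "everyone generating (hypothetical volume)": 50_000_000,  # images per day
}

for label, images_per_day in scenarios.items():
    gwh_per_year = images_per_day * KWH_PER_IMAGE * 365 / 1_000_000
    print(f"{label}: ~{gwh_per_year:,.0f} GWh per year")
```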
One might say that it is negligible compared to other activities such as air transport, but the problem is that each hundredth of a degree of additional warming will result in several hundred thousand deaths during this century. Hence, every action, however insignificant it may seem, counts.
And then one must be wary of this kind of argument. With the right breakdown and the right comparison, everything always seems insignificant. France should not reduce its emissions, because they are insignificant on a global scale. We should not reduce the emissions of private jets, because they are insignificant compared to those of France, ... With this kind of argument, we quickly come to the conclusion that we should do nothing. And the reason is simple: to solve the problem of global warming, we need a multitude of actions that are individually insignificant but indispensable to solving the problem. It's a bit like a car assembly line, where each employee only performs simple tasks like tightening a bolt or assembling two parts. Individually, each of their acts is insignificant, but each one is indispensable to end up with a functioning car at the end of the line.
On the other hand, just as there is not just one way to build a car, there is not just one way to solve the problem of global warming. Depending on the solution chosen, some sacrifices will be significant and others unnecessary. But to know which, a solution would first have to be chosen. So as long as there is no ecological planning, it's impossible to determine which actions are dispensable and which are indispensable.
Except, of course, for political activism in defense of the environment, so that we finally get socially just, democratically decided ecological planning.
That said, nothing stops you from aligning your lifestyle with the solution you promote. In itself, this won't have a significant impact, but it will allow you to advocate more effectively for this solution and to know whether the solution you are promoting is realistic. On a personal level, I am vegan, I never travel by plane, I don't have a car, I live in a small apartment that I heat minimally and without air conditioning, I wear my clothes as long as possible (much to the chagrin of my mother, who thinks I walk around in rags), and I have a second-hand computer that I haven't changed in 6 years; but on the other hand, I consume a lot of computer services like YouTube videos or AI. Basically, I consider (perhaps wrongly) that my efforts in other areas would be sufficient to offset the environmental problems caused by my computer usage, if only GAFAM were forced to be more frugal (for example, by capping online video resolution at 480p or by prohibiting unnecessary uses of AI).
And then, many of my creations are activist content. If this allows me to convince people to vote green, it will be a more than profitable investment for the planet. Kind of like using your car to put up posters before an election.
For me, the real problem with using AI is that we feed our personal data and feedback to American companies run by fascists who will use it to build tools intended to muzzle us. But it's the same problem, magnified 1000 times, with the use of social networks like WhatsApp.
So would you be interested in switching to Signal or Matrix?
The future
That's it for the current situation, but what about the future?
Provided we invest another 10 to 20 years of work by the planet's brightest minds, and commit an ecocide along the way, I think AIs will become extremely useful assistants and may even replace humans in many tasks.
The question of course is, is the game worth the candle?
And, I could be wrong, but I think the future of AI is to become the number one means by which people inform themselves, particularly before making a purchase. Until now, when people had a question, they searched Google, YouTube, or a social network like Facebook. So, basically, Meta or Google. They would then click on the first 3 links that Google or Meta gave them. So for a business, not being among the top 3 results on Google or Meta meant dying.
That's why all companies were giving heaps of money to Google and Meta to be in the top 3 results. Tomorrow, most people will only seek information by asking questions to an AI, and there will be at most 3 consumer AIs.
Result: if your company is not mentioned by these AIs when a customer researches a purchase, then you're dead. As a result, the owners of the AIs will be able to extract fortunes from other companies.
And of course, for propaganda, it will be better than anything the dictators of the 20th century ever dreamed of. And the oligarchs have already understood this. Musk, for example, tampered with his AI so that it constantly talked about the white genocide in South Africa (source: Elon Musk's AI company blames unsanctioned modification for chatbot's tirade about 'white genocide'). Of course, he denies it and says he had nothing to do with it. Even if that's true (I find it hard to believe, but why not), I think it gave some people ideas.
Are AIs a speculative bubble?
Beyond their present and future usefulness, another frequently asked question is: is AI a speculative bubble that will soon burst, ruining the many savers who have put their savings into the stock market?
For me, this question is very poorly posed, for several reasons. First, because what we are talking about cannot be a speculative bubble.
Speculation, whether it's in a bubble or not, is buying something not to use or rent it, but to resell it at a higher price.
There is certainly speculation on AI-related stocks, but no more than on other stocks. Speculation on AI is anecdotal and without consequence. In any case, no more consequential than the billions wagered in casinos or on horse races.
But in short, talking about a speculative bubble on AI is like talking about a Ponzi scheme to describe the pay-as-you-go pension system. It makes no sense.
Those who speak of a speculative bubble to describe the massive investments being made in AI, or the high stock valuations of American tech companies, are either ignorant, liars, or lazy (I plead guilty to often using this expression out of convenience).
The real questions to ask are: what is the purpose of these investments in AI, how likely are those goals to be achieved, at what cost, what will be the consequences of these investments, and is all of this good from a moral, economic, geopolitical perspective, ...
One could write books about these questions, and people far more competent than I am are doing so.
I will therefore abstain, and just get back to the topic by giving the second reason why this question is badly posed: in this context, when we talk about AI, we lump together under this term two completely different things.
And yes, in addition to hiding our ignorance, the term AI creates confusion.
Indeed, ChatGPT-type chatbots have little in common with the algorithms used in Ukraine, Israel or during the Paris Olympics to process images from the thousands of cameras and drones deployed on those sites in order to detect threats/targets and, possibly, trigger an appropriate response automatically. Yet, when we talk about massive investment in AI, a large part of the discussion is about investments made in these military and security AIs.
In Ukraine, they proved very effective at massacring their neighbor, and in the future, an army without drones piloted by AI, or without the means to defend itself against them, will probably be assured of losing.
Well employed, AI and drones now have the potential to be as decisive on the battlefield as artillery (at least this is the feedback from the French strategists who speak in the podcast Le Collimateur).
In Israel or during the Paris Olympics, however, their usefulness was very low. For example, in the conference AI, for better or for worse?, a Thales engineer in charge of testing these systems during the Paris Olympics mentioned that the system confused a spectator raising a finger with a terrorist raising a weapon, yet could not detect the police's assault rifles.
But, with a bit of time and money (which would have been better spent elsewhere), I have no doubt that this technology from hell will eventually emerge. In any case, talking about a speculative bubble that will collapse on its own when it comes to investments made in military and security AIs seems to me dangerously off the mark. By using the same term for a chatbot generating funny images and for these monstrosities, we allow those who invest in death to give themselves a veneer of coolness and morality that prevents them from being fought effectively.