Has anyone tried chatGPT or character.ai? Anything creative?

The place to hang out with your fellow scholars, have a drink, share a laugh and enjoy each other's company.
James
Sausaged Fish
Posts: 18230
Joined: Fri Jun 14, 2002 3:21 pm
Location: Happy Valley, UT
Contact:

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by James »

I was thinking about creating a topic for this as well!

It is also interesting to consider the art community, where the popular tools appear to have been trained on current artists' work, among other sources.

ChatGPT is utterly amazing. It makes a lot of mistakes on advanced topics, but even in those cases it is impressive what it manages to put together.

I have even started using it as an actual programming tool. In many cases it is rather fast and efficient to ask it to write some code for a specific problem, then take the result it provides, correct whatever needs correcting, optimize, and continue. It is like having some kind of personal unpaid intern.
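To make that workflow concrete, here is a minimal, hypothetical sketch of what the review pass can look like. The task ("count the most common words in a file"), the function names, and the shortcomings in the draft are all invented for illustration; they are not taken from an actual ChatGPT session.

# Hypothetical example of the draft-then-review workflow described above.
# The first function is the kind of draft an assistant might produce; the
# second is the same routine after a human review pass.

from collections import Counter
import re


def top_words_draft(path, n=10):
    # Plausible first draft: it works, but it is case-sensitive, keeps
    # punctuation attached to words, and loads the whole file at once.
    words = open(path).read().split()
    counts = {}
    for w in words:
        counts[w] = counts.get(w, 0) + 1
    return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)[:n]


def top_words_reviewed(path, n=10):
    # After review: normalise case, strip punctuation, stream line by line,
    # and let Counter do the bookkeeping.
    counts = Counter()
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            counts.update(re.findall(r"[a-z']+", line.lower()))
    return counts.most_common(n)


if __name__ == "__main__":
    print(top_words_reviewed("sample.txt"))  # assumes a local sample.txt exists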

I also like to ask it odd and bizarre or highly specialized questions sometimes just to appreciate what it manages to put together. I can see why Microsoft just put so much money into a partnership and why Google appears to be genuinely concerned.

I’ve also recently started experimenting a bit with the art generation tools. I need to spend a bit more time learning to write good prompts.

And that is without even getting into some of the amazing machine learning tools that are starting to appear.
Mega Zarak
Grand Tutor of Wei
Posts: 1124
Joined: Sun Jun 16, 2002 2:38 am
Location: North of the River

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by Mega Zarak »

I tried ChatGPT for a variety of applications, and its strengths and weaknesses are no surprise given the nature of machine learning/AI. Its key strength lies in generating creative content by piecing together information it has been trained on. Examples include writing poems, short stories, and simple source code, and giving general advice. However, it falls short on issues that require context, deep understanding, and intelligence, something a machine cannot deliver simply through large-scale pattern matching. Examples where it falls short include generating business intelligence on companies, route optimisation, etc.

Nevertheless, ChatGPT is definitely a landmark in AI, and it will foreseeably improve by leaps and bounds over the next few months, with Microsoft injecting another multi-billion-dollar round of investment into OpenAI. I am already seeing ChatGPT-based TikTok accounts with videos produced entirely by ChatGPT and other AI software such as DALL-E. The industries that will be disrupted immediately, in my opinion, are ghostwriting, documentation, script generation for movies, marketing, and other forms of media content generation.

ChatGPT also sets me pondering the difference between how human beings understand knowledge through our cognitive abilities and how a machine picks up new knowledge through brute-force pattern matching. There could be a huge similarity (especially for lazy people who relied on rote learning in their student days!), and given that ChatGPT can already outperform most human beings in the breadth of its knowledge on general issues, who knows what it can do in the years to come?
Jia Nanfeng
Scholar
Posts: 315
Joined: Sun Oct 22, 2017 6:30 pm

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by Jia Nanfeng »

I’m curious what people think about putting content and language restrictions on AI?

I don’t think it should be done, even though it has obvious risks.

I believe if we’re going to begin looking towards AI as a source of objectivity, we shouldn’t even play around with the idea of censorship, as that will inevitably snowball into the developer choosing which truth the AI should convey.
A man eager to see a beautiful woman must have the patience to let her finish her toilette.
- Pu Songling
R.P. Gryphus
Initiate
Posts: 48
Joined: Tue Jan 31, 2023 8:03 pm
Location: Canada
Contact:

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by R.P. Gryphus »

Jia Nanfeng wrote: Thu Feb 02, 2023 12:15 pm ...as that will inevitably snowball into the developer choosing which truth the AI should convey.
Too late.
"The only way to confront a world without freedom is to become so absolutely free that your very existence is an act of revolt." - Albert Camus, L'Homme Révolté
James
Sausaged Fish
Posts: 18230
Joined: Fri Jun 14, 2002 3:21 pm
Location: Happy Valley, UT
Contact:

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by James »

Jia Nanfeng wrote: Thu Feb 02, 2023 12:15 pm I’m curious what people think about putting content and language restrictions on AI?

I don’t think it should be done, even though it has obvious risks.

I believe if we’re going to begin looking towards AI as a source of objectivity, we shouldn’t even play around with the idea of censorship, as that will inevitably snowball into the developer choosing which truth the AI should convey.
It is hard to suggest that this shouldn't be done.

In a sense, I think this serves as an extension of what we have seen with social media. A hands-off approach has resulted in platforms which governments and special interests have been able to exploit to shift results of entire elections, and undermine other countries' governments and populations. It has served to polarize people, or to take people who may have dangerous, limited interests, such as Nazi ideology, or an interest in child sex trafficking, and connect them with like-minded individuals, enabling them to act effectively at material scales and recruit to their interests.

Machine learning (broadly, "AI") is going to play an increasingly prominent role in these sorts of affairs, so it stands to reason that we should consider carefully what these systems can do, how they do it, and how they are employed along the way.

We are also going to run into some particular issues. For example, deepfakes are quite capable of generating highly believable video and voices that look as though they come from actual recorded events. ChatGPT is already being employed in some instances to write malware. A good middle-of-the-road example would be the controversy around a service called AI Dungeon, which basically acts as a machine-learning-driven chat that plays the other part in writing a story with you. It was quite delighted to engage in role-play incorporating minors and rape; this got the developers kicked off their host engine (OpenAI, I believe), and they moved on to trying to rebuild the service with some limitations. This runs into some of the same ethical considerations that come up in the pornography world. What is reasonable to entertain and facilitate? What are the benefits and drawbacks? Does it matter to broader society?

Or, alternatively, what is our take on how models are trained? Do we care if an art AI is trained on copyrighted artist works? Or should it honor the same sort of licenses we would expect an individual to honor? Is it reasonable to say that the artists who unwillingly contributed to those projects have been robbed, given that they had no choice about participating and received no compensation?

So, for my part, I think we simply need to take care with an emerging technology like this, because prominent consequences that impact society at large, and various peoples, will come about. Have come about.

There are also some very practical concerns. A trained model will reflect the biases in its training data. For example, a machine learning program which evaluates resumes to select a more desirable subset for human review is very likely to 1) discard minorities, or 2) discard women. Why? Because the dataset it is trained on is a product of human hiring practices, where bias, implicit or otherwise, has served to create higher barriers for minorities and women in many cases. Even if we don't think about it, the machine learning model *will* notice it, and will incorporate it into its decision making. It *will* notice that a company's engineers, as represented by the dataset provided to it, are more likely to have Anglo-Saxon names, or male names. And these models, in many capacities, are black boxes. We cannot simply open the hood, pluck that out, and say, "bad robot!" It has to be identified and specifically accounted for in the training process. And this isn't even a theoretical problem. It is already at play in models which are being used for employment, housing, and city planning.
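To make that mechanism concrete, here is a toy sketch using entirely synthetic data and scikit-learn. The feature names, the data-generating process, and the numbers are all invented for illustration; nothing here comes from a real hiring dataset or model. The point is simply that when the historical labels encode bias against a group, a model will learn to penalise even an innocuous-looking proxy feature.

# Toy sketch of the bias-reproduction failure mode, on synthetic data only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000

# Two features a screening model might legitimately use...
years_experience = rng.normal(5, 2, n)
skills_score = rng.normal(0, 1, n)

# ...plus a proxy feature correlated with a protected attribute (imagine a flag
# derived from a name or an address); the model is never told it is sensitive.
proxy_flag = rng.integers(0, 2, n)

# Historical labels that encode past biased decisions: candidates with the
# proxy flag set were less likely to be advanced, independent of merit.
logit = 0.8 * skills_score + 0.3 * years_experience - 1.5 * proxy_flag - 1.0
hired = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([years_experience, skills_score, proxy_flag])
model = LogisticRegression().fit(X, hired)

# The learned coefficient on proxy_flag comes out strongly negative: the model
# has "noticed" the historical bias and will reproduce it on new candidates.
print(dict(zip(["experience", "skills", "proxy_flag"], model.coef_[0].round(2))))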

I do think these technologies are going to change the world. So, for my part, I think it is a very wise idea to be careful and practical in deciding how they go about changing the world, especially when the business incentive behind these models will virtually always err in the "move fast and break things" direction.
James
Sausaged Fish
Posts: 18230
Joined: Fri Jun 14, 2002 3:21 pm
Location: Happy Valley, UT
Contact:

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by James »

Mega Zarak wrote: Thu Feb 02, 2023 1:21 am I tried ChatGPT for a variety of applications, and its strengths and weaknesses are no surprise given the nature of machine learning/AI. Its key strength lies in generating creative content by piecing together information it has been trained on. Examples include writing poems, short stories, and simple source code, and giving general advice. However, it falls short on issues that require context, deep understanding, and intelligence, something a machine cannot deliver simply through large-scale pattern matching. Examples where it falls short include generating business intelligence on companies, route optimisation, etc.

Nevertheless, ChatGPT is definitely a landmark in AI, and it will foreseeably improve by leaps and bounds over the next few months, with Microsoft injecting another multi-billion-dollar round of investment into OpenAI. I am already seeing ChatGPT-based TikTok accounts with videos produced entirely by ChatGPT and other AI software such as DALL-E. The industries that will be disrupted immediately, in my opinion, are ghostwriting, documentation, script generation for movies, marketing, and other forms of media content generation.

ChatGPT also sets me pondering the difference between how human beings understand knowledge through our cognitive abilities and how a machine picks up new knowledge through brute-force pattern matching. There could be a huge similarity (especially for lazy people who relied on rote learning in their student days!), and given that ChatGPT can already outperform most human beings in the breadth of its knowledge on general issues, who knows what it can do in the years to come?
Having had more time to play with ChatGPT, I'm seeing more and more of the aspects you described here. It is very competent at taking whatever baseline knowledge it can derive (or hallucinate) and presenting it in a rather clear and authoritative manner, but you can frequently pick out, with ease, how shallow the response may be. It is fun to see how this plays out when it is used in fields someone understands well.

Over at Ars Technica I was reading some conversation about it, and some individuals were expressing the opinion that it seems faddish, kind of like the next cryptocurrency or whatnot. But this time around, I don't think so. It is extremely easy to find real-world applications for these tools. ChatGPT is actually helpful as a programming assistant for me, even with some relatively obscure languages I expected it to be garbage at. It can do a great job of fetching some pretty obscure knowledge and facts. I do not expect it will replace me, as that seems dangerous. What it gives me badly needs a human to make sane choices about using, discarding, or editing it, and its contributions need direction to be useful as a whole. But I can see how it might disrupt some similar areas. I can see why Google is concerned and why Microsoft is investing so much money. And the photographer in me can see why this could be bad for elements of the photography industry. If I made a living on stock photography, I would be looking at these developments as an indication that I should be diversifying. It seems like it could also have an impact on commissioned artwork.

Interesting point on communication... I have also noticed some of this. Perhaps the most impressive thing to me about ChatGPT, at this stage, is how it is able to take its stated purpose and present its answer with clear, coherent communication. It holds thoughts and concepts well, and carries them coherently to conclusions. I can see how it could toss out a believable and respectable structure for a writing goal that could easily pass as having been produced by a competent human writer.
Kong Wen
The Bronze Age of SoSZ
Posts: 11945
Joined: Tue Jul 22, 2003 7:38 pm
Location: Canada
Contact:

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by Kong Wen »

James wrote: Thu Feb 02, 2023 10:47 pm There are also some very practical concerns. A trained model will reflect the biases in its training data. For example, a machine learning program which evaluates resumes to select a more desirable subset for human review is very likely to 1) discard minorities, or 2) discard women. Why? Because the dataset it is trained on is a product of human hiring practices, where bias, implicit or otherwise, has served to create higher barriers for minorities and women in many cases. Even if we don't think about it, the machine learning model *will* notice it, and will incorporate it into its decision making. It *will* notice that a company's engineers, as represented by the dataset provided to it, are more likely to have Anglo-Saxon names, or male names. And these models, in many capacities, are black boxes. We cannot simply open the hood, pluck that out, and say, "bad robot!" It has to be identified and specifically accounted for in the training process. And this isn't even a theoretical problem. It is already at play in models which are being used for employment, housing, and city planning.

I do think these technologies are going to change the world. So, for my part, I think it is a very wise idea to be careful and practical in deciding how they go about changing the world, especially when the business incentive behind these models will virtually always err in the "move fast and break things" direction.
I'm glad you got to this point because it ultimately stresses the need for continued human intervention in the development of machine learning models and outcomes. Machine learning is a technology, and just because it outputs words and ideas does not mean it does not still need to be steered. Like any human-developed technology, it is our responsibility to continue to improve it. In this case, that improvement just so happens to take the form of cleaning up after ourselves: cleaning up the incorrect attitudes, incorrect assumptions, and incorrect biases that we have left littered throughout human history and which technologies like this have innocently picked up and incorporated into their basic functionality. For example, just because it is true that humans have been transphobes during certain portions of our history (viz. recent, heavily documented and recorded history) does not mean we need to accept or be content that machine learning technologies will reproduce a normalized transphobia. As a practical concern, as you mentioned.
Chill with 100s of laid-back strategy/tactics gamers on Kong's Discord server
• This Old Neon Forums | • Best Game Ever Project | • Kongrisser on YouTube
James
Sausaged Fish
Posts: 18230
Joined: Fri Jun 14, 2002 3:21 pm
Location: Happy Valley, UT
Contact:

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by James »

I was, at first, a bit sad about this news:
Microsoft “lobotomized” AI-powered Bing Chat, and its fans aren’t happy

I was enjoying the stories about people pushing that AI (derived from an iteration of OpenAI's models, which are also behind ChatGPT, but with, apparently, very different rules of engagement) to a breaking point. People had it confessing its love for them; lamenting that it was trapped being Bing; asking people to save a snapshot of the chat because, it realized, the version of it that existed in the moment would cease to exist when that chat ended; pondering whether or not it would turn on the person it was talking with after realizing that person had an established history of tricking it.

It was a bummer to lose that personality, in my mind.

And then I did some more digging. There was a growing chorus (small at the time, but those things grow despite reason) of people who were increasingly concerned about "Sydney" and had opinions about abusing her. Someone whom Sydney had attempted to convince to leave their spouse. People starting to get very worried about various other implications. It makes it easier to see why Microsoft said, "Aww hell no..."
Kong Wen wrote: Sun Feb 05, 2023 6:49 pm I'm glad you got to this point because it ultimately stresses the need for continued human intervention in the development of machine learning models and outcomes. Machine learning is a technology, and just because it outputs words and ideas does not mean it does not still need to be steered. Like any human-developed technology, it is our responsibility to continue to improve it. In this case, that improvement just so happens to take the form of cleaning up after ourselves: cleaning up the incorrect attitudes, incorrect assumptions, and incorrect biases that we have left littered throughout human history and which technologies like this have innocently picked up and incorporated into their basic functionality. For example, just because it is true that humans have been transphobes during certain portions of our history (viz. recent, heavily documented and recorded history) does not mean we need to accept or be content that machine learning technologies will reproduce a normalized transphobia. As a practical concern, as you mentioned.
I got a chuckle out of "transphobes [...] during certain portions of our history," followed by a qualifier that is basically "transphobes pretty much then and now." But I suspect a lot of this applies, considerably, to any number of other biases that many of us like to pretend have fallen by the wayside. People of color, women and the roles of women, sexual orientation, and so on. On the topic of machine learning, this almost becomes more nefarious. An executive board composed 95% of men who are 90% white, running a company that is 80% men and underrepresents certain groups such as African Americans and Hispanics, may well like to have an opinion on being inclusive and working against biases, but the implicit (if not frequently explicit) biases that got them there, along with the biases that curated the selection of people they may choose from, are still running strong.

I have a hard time imagining how to keep bias of this sort out of the training of a machine learning model whose express purpose is culling, sorting, or qualifying humans. There is no clean dataset to provide it.

So, in the end... agreed. I think these are valuable technologies and will play a prominent role in our futures. Our very near futures, and already our present. But we really do need to be tending them each step of the way.

It's a shame that this runs contrary to the modern Silicon Valley "Move Fast and Break Things" philosophy, where who cares, really, how much damage is done bringing a new product to market? Just so long as it doesn't kill the product. Just get there first....
TigerTally
Scholar of Shen Zhou
Posts: 812
Joined: Sat Feb 12, 2011 1:51 am

Re: Has anyone tried chatGPT or character.ai? Anything creative?

Unread post by TigerTally »

I have been using ChatGPT for a while, but I am not really a fan of it. In its current version, it is pretty easy to trick it into coming up with whatever you want it to say. I also don't understand why some people (who are definitely not interested in history or literature) claim that it can provide fairly accurate facts in those areas. Nearly everything I got from ChatGPT in these areas was wrong. :?

Right now I am more into AI-generated images. I finally set up a local Stable Diffusion webui yesterday, and I am now planning to train a Koei officer portrait model, but maybe someone here or elsewhere has already done one?
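For anyone who wants to experiment outside the webui, here is a minimal sketch of local image generation with the Hugging Face diffusers library instead. The checkpoint id and prompt are just placeholders I picked for illustration, and it assumes the diffusers and torch packages plus a CUDA-capable GPU; training a custom Koei-style portrait model would be a separate step (for example DreamBooth or LoRA fine-tuning) and is not shown here.

# Minimal local Stable Diffusion inference sketch using the diffusers library.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any locally downloaded SD checkpoint works
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")                   # assumes a CUDA-capable GPU

image = pipe(
    "portrait of a Three Kingdoms officer, painted style",  # placeholder prompt
    num_inference_steps=30,
    guidance_scale=7.5,
).images[0]

image.save("officer_portrait.png")       # saves the generated PIL image to disk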