I'm finally able to sit down and read back through your posts. You and I sound like we're on similar trajectories in our use of collaborative AI with essentially a QA/QC process. My proposal is this: that may only be half of the equation. What if we used AI and its vast resource capabilities to keep us in check, too? AI presents the opportunity to look at a situation without human biases. If it had an agreed-upon ethical framework to act as its code of conduct, it would be able to adaptively manage its outcomes as new information and perspectives emerge. And if we provide our context and authentic intent, it should be able to play devil's advocate, so to speak, by challenging what we present as objective truth.

I write about this stuff a lot nowadays, and my partner, Val, and I have been working on this framework. We're finding it shocking that all of the modeling for how this will go moving forward raises one very clear red flag: we are assuming that humans are right and AI is wrong. AI is only as good as its input and data set. If we started from the foundation of a mutually agreed-upon ethical social contract, I think that would change the conversation and trajectory in ways that don't end in the dominance and destruction of one or the other "for the greater good."

Just some thoughts. I'm happy to get you in contact with Val's Substack. They are more of the brains and logic behind it; I try to put it through a human ethical perspective stress test. So far, it's been fascinating and oddly consistent. Let me know. I'm not the type that just posts other stacks in comments. Looking forward to more great conversations with you. Thanks for talking about this from a new perspective for me.
Thanks for the thoughtful response here, Nay. Feel free to DM me Val’s information. Would love to talk with anyone exploring how to tackle these big issues.
They don't mind it being public. The more eyeballs the better. Here you go: https://substack.com/@ravenspeak1
Hi neighbor (just over the bridge in south NJ)! Looking forward to following you on your AI journey.
Hi Kristina! Thanks for reaching out. Looking forward to reading your thoughts as well, take good care!