That's going to be the nub of this, isn't it? China, Russia, the US, corporations etc., all with their own competing AIs, all being spun to their own ends, all working against each other and to some extent making them fairly useless. There was just a piece on the radio news saying a commission was asking how to stop ChatGPT giving wrong answers, which seems a fairly common thing and one which will get worse as people work to make it do so. At which point you can't trust it and it eats itself in terms of usefulness. Let's face it, China and Russia's AI isn't going to be full of 'free thinking' ideas based on truth, and neither will big business's. So as something to learn from, it's buggered from the start.
Screwdriver wrote: ↑Fri Jul 14, 2023 8:08 am
But then you haven't even bothered to watch that latest video have you...
I haven't watched any....
I mostly just sit here and laugh at your thoughts sorry..
Whilst you may think there's a problem, i don't.
Finally we can agree: you don't think at all.
But hey, here I am making snide comments myself. In some ways I do agree, I think it is funny too.
I would love to take all of the credit for the "thoughts" expressed here; in truth, it is a distillation of the views of the many people within the industry who are pointing out the obvious dangers. I might take a moment to highlight some of the predictions made in that last video, many of which coincide with views I have expressed since I started this thread. They are both subtle and terrifying.
“No one is more hated than he who speaks the truth.”
Plato
Not everyone's going to be interested, Screwd, which is fair enough. In the end, the same will happen to those that were interested and saw it coming as to those that weren't and didn't care. So they have a point.
I just get interested in 'stuff'. If I could make use of all the 'stuff' I've probably filled my brain with just 'out of interest' I'd probably be a 'Professor of A Little Bit of Everything'. It's got me revisiting the novels of John Brunner though - he was pretty good in the futurology department considering most of his books were written in the 60s. (Computer worms, supercomputers like WESCAC and Shalmaneser, pollution, overpopulation etc.)
(PS Harris raised the potential downsides of social media and 'app culture' years ago (see 'The Social Dilemma'); nobody seemed to take any notice. He's an ex-Google designer who worked on apps.)
Doubt is not a pleasant condition.
But certainty is an absurd one.
Voltaire
Appreciate that, but of all the scenarios being driven largely by the WEF/United Nations, this is the only one where everybody can influence the outcome merely by their interaction with this new alien intelligence.
Screwdriver wrote: ↑Fri Jul 14, 2023 8:44 am
Appreciate that, but of all the scenarios being driven largely by the WEF/United Nations, this is the only one where everybody can influence the outcome merely by their interaction with this new alien intelligence.
The problem there is, a bit like democracy, everybody is largely a fuckwit.
I copied and pasted my text into ChatGPT and asked it to reformat it into a standalone article. Some of the euphemisms and changes to the thrust of my various observations are quite interesting:
Title: The Complexities and Potential Risks of Artificial Intelligence
Introduction:
Artificial Intelligence (AI) is a rapidly evolving field with the potential to bring about significant positive changes in various areas. However, it also poses challenges and raises concerns about the ethical implications, societal impact, and potential risks associated with its development and deployment. In this article, we explore some of these complexities and delve into the multifaceted nature of AI.
The Power of AI and the Human Mind:
One of the great dangers of AI lies in its potential to fall into the wrong hands. Coupled with the capacity for deep fakes and the ability to manipulate information, AI can be incredibly compelling, even when it disseminates false data. The human mind is susceptible to manipulation, making it essential to consider the responsible use and governance of AI technologies. Furthermore, while humans can devise strategies to "fool" AI systems based on their operational parameters, AI might also discover ways to exploit human vulnerabilities, highlighting the need for caution.
The Uncertainty of AI's Development:
The inner workings of AI systems, particularly neural networks, can be complex and difficult to fully understand. Although "generative" refers to the iterative development and evolution AI undergoes to produce desired outputs, comprehending the neural network side can be challenging. The rapid progress of AI models, surpassing initial predictions, poses questions about their level of complexity and the potential for unforeseen developments. It also emphasizes the need for ongoing research, responsible development practices, and ethical considerations.
AI and the Economy:
AI's potential impact on the stock market and the economy raises both opportunities and concerns. With its ability to process vast amounts of data and identify patterns, AI could excel in these areas. However, it is crucial to consider the risks and unintended consequences of relying solely on AI-driven decision-making. Regulations, transparency, and accountability mechanisms are necessary to ensure fairness and mitigate potential negative impacts.
The Quest for Sentient AI:
The question of when AI systems may achieve sentience or self-awareness remains uncertain and speculative. Artificial General Intelligence (AGI), which would possess human-like cognitive abilities across various domains, is still an ongoing research endeavor. Predicting when or how AGI breakthroughs might occur is challenging. Nonetheless, ongoing discussions, responsible development, and governance are necessary to navigate the potential societal impact of AGI.
The Altruism vs. Profit Motive:
The rapid advancements in AI have sparked an arms race, driven by the pursuit of world dominance and financial gain. This race raises concerns about the prioritization of profit over ethical considerations and the potential consequences of unleashing powerful AI systems without proper oversight. It is crucial to ensure that those involved in AI development prioritize the well-being of society and collaborate on regulatory measures to mitigate risks.
Conclusion:
The development and deployment of AI present both immense opportunities and complex challenges. As AI continues to evolve at an astonishing pace, questions surrounding its impact on society, ethical considerations, and governance become increasingly important. Balancing the positive potential of AI with responsible development practices, transparency, and effective regulation is crucial to ensure that we harness the benefits while minimizing risks. The future of AI holds great promise, but its trajectory must be navigated with prudence and foresight.
Word of caution for zoomers (or anyone with a live mic).
Hackers have developed an AI tool that can work out your password by listening to the sound of your keystrokes. It's "only" 90% effective but be wary of typing in passwords while on a zoom call or while on the phone I guess...
Screwdriver wrote: ↑Sat Aug 12, 2023 1:19 am
Word of caution for zoomers (or anyone with a live mic).
Hackers have developed an AI tool that can work out your password by listening to the sound of your keystrokes. It's "only" 90% effective but be wary of typing in passwords while on a zoom call or while on the phone I guess...
It would need more than that to work out which sounds are which keys, it would need to listen to hundreds or thousands of words being typed so it can work out the strokes. Then you'd need to enter the password while on the call.
Besides, really poor password management and email and QR-code phishing are already with us, and happily given zero thought by the masses. People are still happily clicking on emails from Nigerian princes and responding to "Your parcel is ready" texts from "Amazzoon" because they are careless. Add Zoom password spying to the list of "maybe" security risks that big corporations are going to train you to watch out for. You wouldn't believe the number of systems I come across where admins found a reason not to implement strong password complexity, let alone MFA, so your password could be 123456. Yeah.
And most people's mics are really bad at consistently blocking or picking up background noises anyway.
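For a rough sense of why weak complexity policies matter so much more than exotic acoustic attacks, here's a quick back-of-the-envelope entropy comparison (the alphabet sizes and lengths are just illustrative examples, not taken from any real policy):

```python
import math

def entropy_bits(alphabet_size: int, length: int) -> float:
    # log2 of the number of possible passwords of the given
    # length drawn uniformly from the given alphabet
    return length * math.log2(alphabet_size)

# "123456"-style: 6 digits from an alphabet of 10
print(round(entropy_bits(10, 6), 1))   # ~19.9 bits

# 10 mixed-case alphanumeric characters (alphabet of 62)
print(round(entropy_bits(62, 10), 1))  # ~59.5 bits
```

The second search space is roughly a trillion times larger, which is why admins skipping complexity rules and MFA is the far bigger everyday risk.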
Last edited by DefTrap on Sat Aug 12, 2023 8:59 am, edited 3 times in total.
Mussels wrote: ↑Sat Aug 12, 2023 8:29 am
It would need more than that to work out which sounds are which keys, it would need to listen to hundreds or thousands of words being typed so it can work out the strokes. Then you'd need to enter the password while on the call.
I don't know how it does it but it can translate the sounds of typing into text with greater than 90% accuracy.
But yes, you would have to type in <a password> while next to an open mic. Apparently, you don't want to do that.
Screwdriver wrote: ↑Sat Aug 12, 2023 8:56 am
I don't know how it does it but it can translate the sounds of typing into text with greater than 90% accuracy.
But yes, you would have to type in <a password> while next to an open mic. Apparently, you don't want to do that.
As deftrap says, there’s a hundred easier ways to get someone’s password than that. It’s an ever evolving landscape and while the above may be technically feasible, it’s not really a measurable threat at the moment. I am sure that the rise of AI will throw up much more serious threats in the coming years.
Screwdriver wrote: ↑Sat Aug 12, 2023 1:19 am
Hackers have developed an AI tool that can work out your password by listening to the sound of your keystrokes.
No, they haven't.
Researchers have built a model in a lab by extensively training an AI with a specific keyboard so that it can recognise which individual key is being pressed 93% of the time in a test.
It wouldn't necessarily work with several keys being typed quickly, it wouldn't necessarily know if the shift key was being held down, it wouldn't necessarily work without the extensive training on a particular keyboard, it obviously wouldn't work unless you happen to record the keys being typed and researchers aren't hackers. Plus of course, 93% accuracy sounds amazing, but this is per key. So for a 10-character password it's less than 50%.
So yes, it's impressive, and yes, potentially a real threat at some point. But it's not a current threat in any way yet, no.
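To put that per-key figure in perspective, here's a quick sanity check of the compounding (assuming each keystroke is classified independently, which is a simplification):

```python
# Per-key recognition accuracy reported for the acoustic model
PER_KEY_ACCURACY = 0.93

def whole_password_accuracy(length: int, p: float = PER_KEY_ACCURACY) -> float:
    # Probability that every character of an n-character password
    # is recognised correctly, treating keystrokes as independent
    return p ** length

print(round(whole_password_accuracy(10), 3))  # → 0.484
```

So even at an impressive-sounding 93% per key, a 10-character password is recovered in full less than half the time.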
Or a trojan password-stealer (PWS) sits quietly on the device, logging the keystrokes.
This takes us back to the question of what counts as AI. I suspect spy agencies were doing this sort of thing for many years before AI became a thing. It's machine learning, and all the intelligence is human.