Artificial Intelligence and Emotions

FancyMancy

Well-known member
Joined
Sep 20, 2017
Messages
6,737
  • Artificial Emotional Intelligence - What Happens When an AI Knows How You Feel?
  • Artificial Intelligence Emotions - Has Artificial Intelligence Outsmarted Our Emotions?

What Happens When an AI Knows How You Feel?
Technology used to only deliver our messages. Now it wants to write them for us by understanding our emotions.

In May 2021, Twitter, a platform notorious for abuse and hot-headedness, rolled out a “prompts” feature that suggests users think twice before sending a tweet. The following month, Facebook announced AI “conflict alerts” for groups, so that admins can take action where there may be “contentious or unhealthy conversations taking place.” Email and messaging smart-replies finish billions of sentences for us every day. Amazon’s Halo, launched in 2020, is a fitness band that monitors the tone of your voice. Wellness is no longer just the tracking of a heartbeat or the counting of steps, but the way we come across to those around us. Algorithmic therapeutic tools are being developed to predict and prevent negative behavior.

Jeff Hancock, a professor of communication at Stanford University, defines AI-mediated communication as when “an intelligent agent operates on behalf of a communicator by modifying, augmenting, or generating messages to accomplish communication goals.” This technology, he says, is already deployed at scale.

Beneath it all is a burgeoning belief that our relationships are just a nudge away from perfection. Since the start of the pandemic, more of our relationships depend on computer-mediated channels. Amid a churning ocean of online spats, toxic Slack messages, and infinite Zoom, could algorithms help us be nicer to each other? Can an app read our feelings better than we can? Or does outsourcing our communications to AI chip away at what makes a human relationship human?

Coding Co-Parenting
You could say that Jai Kissoon grew up in the family court system. Or, at least, around it. His mother, Kathleen Kissoon, was a family law attorney, and when he was a teenager he’d hang out at her office in Minneapolis, Minnesota, and help collate documents. This was a time before “fancy copy machines,” and while Kissoon shuffled through the endless stacks of paper that flutter through the corridors of a law firm, he’d overhear stories about the many ways families could fall apart.

In that sense, not much has changed for Kissoon, who is cofounder of OurFamilyWizard, a scheduling and communication tool for divorced and co-parenting couples that launched in 2001. It was Kathleen’s concept, while Jai developed the business plan, initially launching OurFamilyWizard as a website. It soon caught the attention of those working in the legal system, including Judge James Swenson, who ran a pilot program with the platform at the family court in Hennepin County, Minneapolis, in 2003. The project took 40 of what Kissoon says were the “most hardcore families,” set them up on the platform—and “they disappeared from the court system.” When someone eventually did end up in court—two years later—it was after a parent had stopped using it.

Two decades on, OurFamilyWizard has been used by around a million people and gained court approval across the US. In 2015 it launched in the UK and a year later in Australia. It’s now in 75 countries; similar products include coParenter, Cozi, Amicable, and TalkingParents. Brian Karpf, secretary of the American Bar Association, Family Law Section, says that many lawyers now recommend co-parenting apps as standard practice, especially when they want to have a “chilling effect” on how a couple communicates. These apps can be a deterrent for harassment and their use in communications can be court-ordered.

In a bid to encourage civility, AI has become an increasingly prominent feature. OurFamilyWizard has a “ToneMeter” function that uses sentiment analysis to monitor messages sent on the app— “something to give a yield sign,” says Kissoon. Sentiment analysis is a subset of natural language processing, the analysis of human speech. Trained on vast language databases, these algorithms break down text and score it for sentiment and emotion based on the words and phrases it contains. In the case of the ToneMeter, if an emotionally charged phrase is detected in a message, a set of signal-strength bars will go red and the problem words are flagged. “It’s your fault that we were late,” for example, could be flagged as “aggressive.” Other phrases could be flagged as being “humiliating” or “upsetting.” It’s up to the user if they still want to hit send.
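ToneMeter's internals aren't public, so the following is only a minimal, lexicon-style Python sketch of the idea described above: scan a draft message against a small, hand-made list of charged phrases and raise a "yield sign" before sending. The phrase lists, tone categories, and threshold are invented for illustration and are not OurFamilyWizard's actual data or code.

Code:
# Toy lexicon-based tone flagging (illustrative only; not ToneMeter's code).
CHARGED_PHRASES = {
    "aggressive": ["your fault", "you always", "you never"],
    "humiliating": ["pathetic", "useless"],
    "upsetting": ["i'm done", "don't bother"],
}

def flag_message(text, threshold=1):
    """Return charged phrases found in a draft message, grouped by tone."""
    lowered = text.lower()
    hits = {}
    for tone, phrases in CHARGED_PHRASES.items():
        found = [p for p in phrases if p in lowered]
        if found:
            hits[tone] = found
    total = sum(len(found) for found in hits.values())
    return hits, total >= threshold  # (flagged phrases, show the warning?)

draft = "It's your fault that we were late."
flags, warn = flag_message(draft)
print(flags, warn)  # {'aggressive': ['your fault']} True

A production system would use a trained sentiment model rather than a hand-written phrase list, but the user-facing behavior, flagging the wording and leaving the send decision to the user, is the same.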

ToneMeter was originally used in the messaging service, but is now being coded for all points of exchange between parents in the app. Shane Helget, chief product officer, says that soon it will not only discourage negative communication but encourage positive language too. He is gathering insights from a vast array of interactions, with a view to the app proactively nudging parents to behave positively toward each other beyond regular conversations. There could be reminders to communicate schedules in advance, or offers to swap dates for birthdays or holidays—gestures that may not be required but could be well received.

CoParenter, which launched in 2019, also uses sentiment analysis. Parents negotiate via text and a warning pops up if a message is too hostile—much like a human mediator might shush their client. If the system does not lead to an agreement, there is the option to bring a human into the chat.

Deferring to an app for such emotionally fraught negotiations is not without issues. Kissoon was conscious not to allow the ToneMeter to score parents on how positive or negative they seem, and Karpf says he has seen a definite effect on users' behavior. "The communications become more robotic," he says. "You're now writing for an audience, right?"

Co-parenting apps might be able to help steer a problem relationship, but they can't solve it. Sometimes, they can make it worse. Karpf says some parents weaponize the app and send "bait" messages to wind up their spouse and goad them into sending a problem message: "A jerk parent is always going to be a jerk parent." Kissoon recalls a conversation he had with a judge when he launched the pilot program. "The thing to remember about tools is that I can give you a screwdriver and you can fix a bunch of stuff with it," the judge said. "Or you can go poke yourself in the eye."

Computer Says Hug
In 2017, Adela Timmons was a doctoral student in psychology undertaking a clinical internship at UC San Francisco and San Francisco General Hospital, where she worked with families that had young children from low-income backgrounds who had been exposed to trauma. While there, she noticed a pattern emerging: Patients would make progress in therapy only for it to be lost in the chaos of everyday life between sessions. She believed technology could “bridge the gap between the therapist’s room and the real world” and saw the potential for wearable tech that could intervene just at the moment a problem is unfolding.

In the field, this is a “Just in Time Adaptive Intervention.” In theory, it’s like having a therapist ready to whisper in your ear when an emotional alarm bell rings. “But to do this effectively,” says Timmons, now director of the Technological Interventions for Ecological Systems (TIES) Lab at Florida International University, “you have to sense behaviors of interest, or detect them remotely.”

Timmons’ research, which involves building computational models of human behavior, is focused on creating algorithms that can effectively predict behavior in couples and families. Initially she focused on couples. For one study, researchers wired up 34 young couples with wrist and chest monitors and tracked body temperature, heartbeat and perspiration. They also gave them smartphones that listened in on their conversations. By cross-referencing this data with hourly surveys in which the couples described their emotional state and any arguments they had, Timmons and her team developed models to determine when a couple had a high chance of fighting. Trigger factors would be a high heart rate, frequent use of words like “you,” and contextual elements, such as the time of day or the amount of light in a room. “There isn’t one single variable that counts as a strong indicator of an inevitable row,” Timmons explains (though driving in LA traffic was one major factor), “but when you have a lot of different pieces of information that are used in a model, in combination, you can get closer to having accuracy levels for an algorithm that would really work in the real world.”
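The article doesn't publish Timmons' models, but the general approach it describes, combining several weak signals in one classifier so that no single variable has to carry the prediction, can be sketched in a few lines of Python. The features (mean heart rate, "you"-word frequency, hour of day) and the tiny synthetic dataset below are assumptions for illustration only.

Code:
# Illustrative multi-signal conflict predictor trained on made-up data.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Each row: [mean heart rate (bpm), "you"-words per minute, hour of day]
X = np.array([
    [68, 0.2, 10], [72, 0.5, 14], [95, 3.0, 18], [88, 2.5, 19],
    [70, 0.3,  9], [99, 4.1, 20], [92, 2.8, 17], [66, 0.1, 11],
])
y = np.array([0, 0, 1, 1, 0, 1, 1, 0])  # 1 = argument reported in that hour

model = LogisticRegression(max_iter=1000).fit(X, y)

# Probability of conflict for a new observation: elevated heart rate,
# frequent "you", early-evening commute hour.
print(model.predict_proba([[94, 3.2, 18]])[0, 1])

No single column decides the output; the model weighs them together, which is the point Timmons makes about combining many pieces of information.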

Timmons is expanding on these models to look at family dynamics, with a focus on improving bonds between parents and children. TIES is developing mobile apps that aim to passively sense positive interactions using smartphones, Fitbits, and Apple Watches (the idea is that it should be workable with existing consumer technology). First, the data is collected—predominantly heart rate, tone of voice, and language. The hardware also senses physical activity and whether the parent and child are together or apart.

In the couples’ study, the algorithm was 86 percent accurate at detecting conflict and was able to generate a correlation with self-reported emotional states. In a family context, the hope is that by detecting these states the app will be able to actively intervene. “It might be a prompt, like ‘go give your child a hug’ or ‘tell your child something he or she did well today,’” says Timmons. “We’re also working on algorithms that can detect negative states and then send interventions to help the parent regulate their emotion. We know that when a parent’s emotion is regulated, things tend to go better.”

Contextual information helps improve prediction rates: Has the person slept well the night before? Have they exercised that day? Prompts could take the form of a suggestion to meditate, try a breathing exercise, or engage with some cognitive behavioral therapy techniques. Mindfulness apps already exist, but these rely on the user remembering to open them at a moment when they are likely to be angry, upset, or emotionally overwhelmed. "It's actually in those moments where you're least able to pull on your cognitive resources," says Timmons. "The hope is that we can meet the person halfway by alerting them to the moment that they need to use those skills." From her experience working with families, the traditional structure of therapy—50-minute sessions once a week—is not necessarily the most effective way to make an impact. "I think the field is starting to take more of an explicit interest in whether we can expand the science of psychological intervention."

The work is supported by a grant from the National Institutes of Health and National Science Foundation as part of a fund to create technology systems that are commercially viable, and Timmons hopes the research will lead to psychological health care that is accessible, scalable, and sustainable. Once her lab has the data to prove it is effective and safe for families—and does not cause unexpected harm—then decisions will need to be made about how such technology could be deployed.

As data-driven health care expands, privacy is a concern. Apple is the latest major tech company to expand into this space; it is partway through a three-year study with researchers at UCLA, launched in 2020, to establish if iPhones and Apple Watches could detect—and, ultimately, predict and intervene in—cases of depression and mood disorders. Data will be collected from the iPhone’s camera and audio sensors, as well as the user’s movements and even the way they type on their device. Apple intends to protect user data by having the algorithm on the phone itself, with nothing sent to its servers.

At the TIES lab, Timmons says that no data is sold or shared, except in instances relating to harm or abuse. She believes it is important that the scientists developing these technologies think about possible misuses: “It’s the joint responsibility of the scientific community with lawmakers and the public to establish the acceptable limits and bounds within this space.”

The next step is to test the models in real time to see if they are effective and whether prompts from a mobile phone actually lead to meaningful behavioral change. “We have a lot of good reasons and theories to think that would be a really powerful mechanism of intervention,” Timmons says. “We just don’t yet know how well they work in the real world.”

An X-Ray for Relationships
The idea that sensors and algorithms can make sense of the complexities of human interaction is not new. For relationship psychologist John Gottman, love has always been a numbers game. Since the 1970s, he has been trying to quantify and analyze the alchemy of relationships.

Gottman conducted studies on couples, most famously at the “Love Lab,” a research center at the University of Washington that he established in the 1980s. A version of the Love Lab still operates today at the Gottman Institute in Seattle, founded with his wife, Julie Gottman, a fellow psychologist, in 1996. In rom-com terms, the Love Lab is like the opening sequence of When Harry Met Sally spliced with the scene in Meet the Parents when Robert De Niro hooks his future son-in-law up to a lie detector test. People were wired up two by two and asked to talk between themselves—first about their relationship history, then about a conflict—while various pieces of machinery tracked their pulse, perspiration, tone of voice, and how much they fidgeted in their chair. In a back room filled with monitors, every facial expression was coded by trained operators. The Love Lab aimed to collect data on how couples interact and convey their feelings.

This research led to the "Gottman method," a relationship-counseling methodology. Among its tenets: that it's important to maintain a 5:1 ratio of positive to negative interactions; that a 33 percent failure to respond to a partner's bid for attention equates to a "disaster"; and that eye-rolls are strongly correlated with marital doom. "Relationships aren't that complicated," John Gottman says, speaking from his home on Orcas Island, Washington.

The Gottmans, too, are stepping into the AI realm. In 2018, they founded a startup, Affective Software, to create an online platform for relationship assessment and guidance. It started from an IRL interaction: a friendship sparked many years ago when Julie Gottman met Rafael Lisitsa, a Microsoft veteran, as they collected their daughters at the school gates. Lisitsa, the cofounder and CEO of Affective Software, is developing a virtual version of the Love Lab, in which couples can have the same "x-ray" diagnosis of their relationship delivered via the camera on their computer, iPhone, or tablet. Again, facial expressions and tone of voice are monitored, as well as heart rate. It's an indicator of how far emotion detection, or "affective computing," has come; though the original Love Lab was backed up by screens and devices, ultimately it took a specially trained individual to watch the monitor and correctly code each cue. Gottman never believed the human element could be removed. "There were very few people who can actually really sensitively code emotion," he says. "They had to be musical. They had to have some experience with theatre ... I never dreamed a machine would be able to do that."

Not everyone is convinced that machines can do this. Emotion-detecting AI is choppy territory. It is largely built on the idea that humans have universal expressions of emotions—a theory developed in the 1960s and ’70s with observations by Paul Ekman, who created a facial expression coding system that informs the Gottmans’ work and forms the basis of much affective computing software. Some researchers, such as Northeastern University psychologist Lisa Feldman Barrett, have questioned whether it is possible to reliably detect emotion from a facial expression. And though already widely used, some facial recognition software has shown evidence of racial bias; one study that compared two mainstream programs found they assigned more negative emotions to Black faces than white ones. Gottman says the virtual Love Lab is trained on facial datasets that include all skin types and his system for coding interactions has been tested across different groups in the US, including African American and Asian American groups. “We know culture really does moderate the way people express or mask emotions,” he says. “We’ve looked in Australia, the UK, South Korea, and Turkey. And it seems like the specific affect system I’ve evolved really does work. Now, will it work in all cultures? We really don’t know.”

Gottman adds that the Love Lab really operates by means of a social coding system; by taking in the subject matter of the conversation, tone of voice, body language, and expressions, it is less focused on detecting a singular emotion in the moment and instead analyzes the overall qualities of an interaction. Put these together, says Gottman, and you can more reliably come up with a category like anger, sadness, disgust, or contempt. When a couple takes part, they are invited to answer a detailed questionnaire, then record two 10-minute conversations. One is a discussion about the past week; the other is about a conflict. After uploading the videos, the couple rate their emotional state during different stages of the conversation, from 1 (very negative) to 10 (very positive). The app then analyzes this, along with the detected cues, and provides results including a positive-to-negative ratio, a trust metric, and the prevalence of the dreaded "Four Horsemen of the Apocalypse": criticism, defensiveness, contempt, and stonewalling. It is intended to be used in conjunction with a therapist.
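Affective Software's scoring isn't described in detail, but the summary metrics mentioned above, a positive-to-negative ratio measured against the Gottmans' 5:1 benchmark and a tally of "Four Horsemen" cues, are straightforward to compute once an interaction has been coded. The coded segments below are hypothetical, and treating a self-rating of 6 or above as "positive" is a simplification made for this sketch.

Code:
# Toy summary metrics over hypothetical coded conversation segments.
FOUR_HORSEMEN = {"criticism", "defensiveness", "contempt", "stonewalling"}

# Each segment: (self-rating 1..10, set of cue labels detected in the video)
segments = [
    (8, set()), (7, set()), (3, {"criticism"}), (9, set()),
    (2, {"contempt", "defensiveness"}), (8, set()), (6, set()),
]

positive = sum(1 for rating, _ in segments if rating >= 6)
negative = sum(1 for rating, _ in segments if rating <= 5)
ratio = positive / max(negative, 1)

horsemen_counts = {h: 0 for h in FOUR_HORSEMEN}
for _, cues in segments:
    for cue in cues & FOUR_HORSEMEN:
        horsemen_counts[cue] += 1

print(f"positive:negative = {ratio:.1f}:1 (Gottman benchmark 5:1)")
print("Four Horsemen prevalence:", horsemen_counts)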

Therapy and mental health services are increasingly provided through video calls—since the pandemic, this shift has been supercharged. Venture capital investment in virtual care and digital health has tripled since Covid-19, according to analysts at McKinsey, and AI therapy chatbots, such as Woebot, are going mainstream. Relationship counseling apps such as Lasting are already based on the Gottman method and send notifications to remind users to, for example, tell their partner that they love them. One could imagine this making us lazy, but the Gottmans see it as an educational process—arming us with tools that will eventually become second nature. The team is already thinking about a simplified version that could be used independently of a therapist.

For the Gottmans, who were inspired by the fact that so many couples are stuck on their smartphones anyway, technology opens up a way to democratize counseling. “People are becoming much more comfortable with technology as a language,” says Gottman. “And as a tool to improve their lives in all kinds of ways.”

Email for You, but Not by You
This technology is already everywhere. It could be impacting your relationships without you noticing. Take Gmail's Smart Reply—those suggestions of how you may respond to an email—and Smart Compose, which offers to finish your sentences. Smart Reply was added as a mobile feature in 2015, and Smart Compose rolled out in 2018; both are powered by neural networks.

Jess Hohenstein, a PhD researcher at Cornell University, first encountered Smart Reply when Google Allo, the now-defunct messaging app, was launched in 2016. It featured a virtual assistant that generated reply suggestions. She found it creepy: “I didn’t want some algorithm influencing my speaking patterns, but I thought this had to be having an effect.”

In 2019, she ran studies that found that AI is indeed changing the way we interact and relate to each other. In one study using Google Allo, 113 college students were asked to complete a task with a partner where one, both, or neither of them were able to use Smart Reply. Afterwards, the participants were asked how much they attributed the success or failure of the task to the other person (or AI) in the conversation. A second study focused on linguistic effects: how people responded to positive or negative "smart" replies.

Hohenstein found that the language people used with Smart Reply skewed toward the positive. People were more likely to roll with a positive suggestion than a negative one; participants also often found themselves in a situation where they wanted to disagree, but were only offered expressions of agreement. The effect is to make a conversation go faster and more smoothly, and Hohenstein noticed that it made people in the conversation feel better about one another too.

Hohenstein thinks that this could become counterproductive in professional relationships: This technology (combined with our own suggestibility) could discourage us from challenging someone, or disagreeing at all. In making our communication more efficient, AI could also drum our true feelings out of it, reducing exchanges to bouncing “love it!” and “sounds good!” back at each other. For people in the workplace who have traditionally found it harder to speak up, this could add to the disincentive to do so.

In the task-completion study, Hohenstein found that the humans took credit for positive outcomes. When something went wrong, the AI was blamed. In doing so, the algorithm protected the human relationship and provided a buffer for our own failings. It raises a deeper question of transparency: should it be revealed that an AI has helped craft a response? When a partner was using Smart Reply, it initially made the receiver feel more positive about the other person. But when told that an AI was involved, they felt uncomfortable.

This underpins a paradox that runs through the use of such technology—perception and reality are not aligned. “People are creeped out by it, but it’s improving interpersonal perceptions of the people you’re communicating with,” says Hohenstein. “It’s counterintuitive.”

In his paper, Hancock highlights how these tools “may have widespread social impacts” and outlines a research agenda to address a technological revolution that has happened under our noses. AI-mediated communication could transform the way we speak, mitigate bias, or exacerbate it. It could leave us wondering who we’re really speaking to. It could even change our self-perception. “If AI modifies a sender’s messages to be more positive, more funny, or extroverted, will the sender’s self-perception shift towards being more positive, funny, or extroverted?” he writes. If AI takes over too much of our relationships, then what are we really left with?

https://archive.is/I4CIh



This article appears to have been written around 2014.
Has Artificial Intelligence Outsmarted Our Emotions?
The history of our relationship with technology is simple: we purchased machines and devices that we expected to fulfill a certain need. Be it a computer for sending emails, an e-reader for reading books on the go, or a smartwatch for helping us stay on top of notifications, we interact with technology with predictable reciprocity. This relationship, however, is starting to shift. As devices become artificially intelligent, it seems we’ve reached a critical new phase where we are striving to please our gadgets.

In today’s increasingly competitive technology world, it’s imperative that companies recognize the connections we’re forming with “smart” inanimate objects, and figure out ways to develop products accordingly – because the devices that will prevail are those that not only please us, but those that we also hope to please.

We Can, and Do, Become Emotionally Intertwined With Tech
Odds are, you’ve experienced leaving the house without your phone and felt some anxiety. And you likely thought to yourself — how crazy is it that I’ve become this reliant on my phone? Crazy though it may be, science shows that we crave interaction with technology — we check our phones 150 times a day. Why? Not just because we’re scared to miss an important email, but because endorphins are released when we check our social media. Typically triggered by positive human-to-human interactions, exercise, and eating a satisfying meal, endorphins are now being released after technology-based interactions.

Not all reactions to technological stimuli are subconscious, though. Some people seek out the distinctly pleasurable feelings they get from non-human sources.

A recent article in The New York Times describes a phenomenon called autonomous sensory meridian response, or A.S.M.R.: a tingling sensation that travels over the scalp or other parts of the body in response to auditory, olfactory, or visual forms of stimulation. Today, hundreds of self-described 'ASMRtists' cater to their tingle-seeking viewers on YouTube channels, cooing "I love you" and other human sentiments, otherwise reserved for those we've actually met, into the camera. Many viewers have reported being unable to fall asleep without watching these ASMR videos (https://archive.is/0WlHH), exemplifying this emerging dependency on a machine for basic behavior.

As Tech Gets Smarter, We Evolve From Skepticism and Trust to Validation
As humans, we have an innate desire to satisfy people that we hold in high esteem. As for inanimate objects, we generally purchase products with the expectation of satisfaction. But not anymore. Let's take a look at GPS technology — remember when we thought of it as an annoying series of disembodied directional dictates, famously spoofed in a 2009 episode of SNL? This aversion to GPS has undoubtedly evolved into trust, the technology becoming a source of comfort when navigating unfamiliar territory (https://archive.is/ckloT). When's the last time you ventured somewhere new without the help of mapping technology?

Going beyond just trust, we’re starting to engage in relationships with devices where we strive to reach goals and produce results that please the technology. Driving app Dash motivates you to be a better driver, offering real-time feedback to improve driving and efficiency. Productivity app Carrot is known as a “sadistic to-do list” that reminds you to do your tasks and gets mad at you when you don’t get things done. Smart Alarm Clock and Withings analyze your sleeping patterns and make smart recommendations about what you can do to improve them and be as well-rested as possible.

Now, tech-derived validation is permeating into everyday aspects of our lives with artificial intelligence. First we saw accelerometer-based fitness trackers like Fitbit and Jawbone UP that would push people to hit 10,000 steps per day. When people didn't hit that goal, they often felt disappointed in themselves. Taken a step further, my own company Moov, an artificially intelligent fitness coach that trains people the way a personal trainer would, pushes users by giving them real-time feedback while they're working out. By creating a tough, caring, supportive, and hard-to-impress coach character, we have humanized the experience, and users develop a relationship with the device in the same way they would with a personal trainer. They work harder, in part, because they want to please the Moov coach (https://archive.is/nSTWX).

No matter the technology, these apps and devices all have one thing in common: helping people set and achieve goals through computer efficiency meshed with human-like feedback.

Know Thy Robot: Recognize and Respect When Too-Human Turns Too-Creepy
Research has shown that people prefer robots that seem capable of conveying at least some degree of human emotion (https://archive.is/pmLPD). Robots with facial features, voice interaction, and human-like gestures were found to be much preferred over robots that did not display any of these qualities.

There is a fine line, however, between arousing human emotion and repelling it when it comes to artificial intelligence. The Uncanny Valley hypothesis (https://www.youtube.com/watch?v=9K1Kd9mZL8g) holds that people are repulsed by robots that look and move almost (but not quite) like humans. The lesson? As with most things, balance is crucial. Adding human characteristics through AI design is only appealing so long as the technology maintains its honest, outwardly robotic qualities.

Companies that are now household names got this right. Apple’s Siri, for instance, is arguably the most well-known artificially intelligent personal assistant thanks to its human-like speech patterns sounding from its visibly non-human source. Then there’s the wildly successful Jibo, marketed as a family robot that aims to humanize technology. Jibo plays with the kids, reminds parents of calendar appointments, knows when to take pictures of important family moments, and acts as your personal assistant while still looking like the mechanical being that it is. Although they’ve yet to ship the product, there’s clearly a desire for this type of technology — they raised almost $2.3 million on Indiegogo.

Software and hardware are both taking on human tendencies, and as a result, our relationship with technology is evolving. Including subtle anthropomorphic cues like human voices helps establish a sense of trust, and eventually, an inner desire to please. As we continue to strive for validation from our devices, the emotional interplay between man and his man-made objects can only be achieved with the very real power of artificially intelligent technology.
https://archive.is/0232e
 
The enemy probably knows more than some of us about how the human mind works trends in society and maybe even how future events will play out or when they will happen and what will happen with this type of technology. It actually could be pretty accurate if they have a large enough sample of the population. It is not fool proof though of course AI algorithms can get stuff wrong or even be programmed wrong by people intentionally as a hack or prank.
 
I think this stupid emotional prediction or pre-emotional bullshit is just that bullshit. It's like greys they are nice and happy and loving to each other hostile to everything. It seems to me like this is some sort of greying technology. As cancerian I avoid all this crap for me this technology is just being manipulative and not solving the problem. It's a bandaid and dam technology. Eventually the dam will overflow and out comes the bullets. I think this is just the created problem of useless pointless people and useless pointless emotional underpinning most people posses due to generational goying. It's like they created this technology out of a problem that should have never been in the beginning IF anything this technology needs to be done in laboratories and scientific places whereby it's limited and not used. In simplest terms develop technology for technologies sake but to develop other technologies. Like the Military despite the Government is under zog occupation and they create and harness technology for control. Funny enough that very militaristic mindset of holding the technology in a sandbox is exactly what is needed. If anything this is un-sandboxing and normalization is going to create a Bezmenov destabilization.

Sheer fact is I'm getting kinda tired of technology. Mostly the bullshit from tech websites I visit. I've noticed an acceleration in pointless and evolving technology. It's one thing to have technology evolve it's another when every bit a piece of technology is made simply for shekelberging. For example just recently Nvidia is planning on at some point in the coming months releasing the 3090 Ti a side-upgrade to the 3090 from about a year or two ago. Funny it's nearly 4,000 dollars the MSRP is supposed to be like 1,200.

(I could be wrong that it was Logitech; I remember the article but not the company.) For example, two or three years ago Logitech got backlash from the community; some people even implored Logitech to release the device in question as free software or open-source it. The device was a music player and controller, and it was considered by some people to be such a good device that, when they heard Logitech was replacing it with a new controller/music player, they freaked out, only to then hear that Logitech would release a patch to End-of-Life the old one, i.e. brick it. If you updated, the device was bricked, so you had to buy the new version. To which people said F all that.

It's like smartphones. I only got my smartphone in 2019 and I'm already sick and tired of it. I only use this former flagship, which I regret; I should have gotten the iPhone 8. I use it in a limited way, not even professionally or anything, just simple things. I do buy proper things and it is considered a good smartphone, but funny enough it's like, WTF is the point of it, except texting and talking to my family? It's a completely pointless device; I mostly just use it for internet searches from time to time and watching a YouTube channel or two. Other than that it is not good technology. Funny, even after just using it for two or three days it hits about 50ish % battery and I charge it back to 100. Even leaving it overnight it eats up 3-4% battery. It's charged to 100%, but over the night it eats up battery.

At the end of the day I think our technology is too limited and too into combat to do good. It's like the internet people ask why is everyone so hostile. It's like Tim Pool said a while back that he grew up on the forefront of the internet with all the texting and hostility and internet warfare. If the internet was originally a weapon system for the military a messaging system and even the people in the beginning used it for internet warfare and rule 34 and all these things. Funny enough you got multi-generations using the internet as a weapon.

Everything is a weapon basically these AI things are just that weapons. We certainly have the Martian combat the whole 10,000 years of non-stop warfare meme from history.

Funny; if that is how humanity has become, no wonder there are people trying to make a buck from said markets.

I'd like to see the Gods' technology. Not necessarily just the military stuff, though I'm sure that is cool, but the proper technology. Reminds me of road technology: the U.S. built its entire highway infrastructure to counter a Soviet penetration. Funny, the highway system in LA is horribad. I wonder why it was built that way; it's so bad it's created issues. Sounds to me like stupid humans with stupidly limited information, not knowing about the future or the issues to come. Lack of information.

All these technologies is lack of information. Like a quote my father said "There is more tech, computer, texting, phones and whatnot than Humans like cars billions of cars and yet Humanity is more stupid, communized, bolshevished, eating shit, combat than ever".

And he is right. Technology, understandably, wants to do good for those who deem it good, but it's probably gonna be used for bad.

The path of quickest and least resistance is the subversive, destructive path. Sometimes I wonder how humanity gets up in the morning and passes the day without blowing the shit out of each other.
 

Al Jilwah: Chapter IV

"It is my desire that all my followers unite in a bond of unity, lest those who are without prevail against them." - Satan
