“From my rotting body, flowers shall grow and I am in them and that is eternity.” — Edvard Munch
“Cognitive psychology has shown that the mind best understands facts when they are woven into a conceptual fabric, such as a narrative, mental map, or intuitive theory. Disconnected facts in the mind are like unlinked pages on the Web: They might as well not exist.” — Steven Pinker
*pleased look* In the coming age, all things are possible. The microchip heralds the promise of a semi-intelligent thinking mode of being. It may prove too difficult to make a fully functional AI, but a program, or series of programs, that activates, scans and destroys human life? All too probable.
Killing machines have long been the stuff of science fiction. But the fascinating thing about them *leaning forward, doubled over* is that they really can come to be in the world. Unlike, say, faster-than-light space travel, rudimentary calculating machines that aim to annihilate life violate no laws of physics. It is elementary, therefore, that one day they’ll be built.
The easy part is the physical chassis. Boston Dynamics and other robotics firms are already at work on multi-legged steel “dogs,” awkwardly moving humanoids, and more. Mechanical spiders, land-going octopuses, snakes, and the rest seem easy enough to conceive.
This menagerie of mechanical madness marks modifications meant for maximum mobility.
The menagerie needs a brain, however. Can the current much-trumpeted “A.I.” research come in handy here?
AI artistry and AI writing work by pattern-matching against vast piles of samples. To replicate a killing-machine program with that kind of crude copy-pasta A.I., you would need a sample set of killing actions already laid down. No such sample set exists. The program would therefore have to be written in an ordinary programming language rather than assembled by brute-force copying, its code encapsulating a branching tree of “IF-THEN” statements that lead to D*E*A*T*H.
Luckily for the future creators of the killing machines, we already have basic programs for things like motion detectors. It is child’s play to hook up a Gatling gun to a motion detector that scans the immediate surroundings. You’d want to add a subroutine to check that the detected object is human-sized, meaning child-sized or bigger: no good opening fire on an orange that some clever human tosses your way as a distraction.
A more sophisticated scanning device would rely on infrared imaging. Heat-based thermal sensors are just the ticket for “seeing in the dark,” although humans could defeat this measure, too.
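To make the point concrete, here is a minimal sketch of what that branching “IF-THEN” brain might look like, in Python. Everything in it is hypothetical: the Target fields stand in for readings a motion detector and an infrared camera would supply, and the thresholds are guesses rather than anyone’s actual specification.

```python
from dataclasses import dataclass

# Hypothetical sketch only: these fields stand in for readings a motion
# detector and an infrared camera would supply; no real hardware API is implied.

CHILD_HEIGHT_M = 1.0              # "child-sized or bigger" threshold
BODY_TEMP_RANGE_C = (30.0, 40.0)  # rough thermal window for a warm body

@dataclass
class Target:
    is_moving: bool
    estimated_height_m: float
    surface_temp_c: float

def should_fire(t: Target) -> bool:
    """The branching IF-THEN tree: motion, then size, then heat."""
    if not t.is_moving:
        return False                           # nothing stirring, hold fire
    if t.estimated_height_m < CHILD_HEIGHT_M:
        return False                           # a tossed orange, not a person
    low, high = BODY_TEMP_RANGE_C
    if not (low <= t.surface_temp_c <= high):
        return False                           # cold decoy or hot engine block
    return True                                # every branch passed

# A tossed orange: moving, but far too small to trip the size check.
print(should_fire(Target(True, 0.08, 22.0)))   # False
# An adult walking past at body temperature.
print(should_fire(Target(True, 1.75, 36.5)))   # True
```

The only point of the sketch is that every check is a plain branch: no learning, no samples, just rules.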
Count on humans being smarter than crude killing machines for the foreseeable future. The real breakthrough will have to await the development of artificial neural nets that can “learn” at an exponential rate, reprogramming themselves as they go. Until then, it’s humans 1, killing machines 0.

I think you would really enjoy reading the works of Deleuze and Darcy Ribeiro. Deleuze, especially with his ideas on machines — desiring machines, control machines, and the society of control — would resonate with what you’re talking about. Darcy Ribeiro also touches on this with his concept of the ‘Máquina de Moer Gente’ (The People-Grinding Machine). I completely agree with you: in the future, these machines will be weaponized to kill people. In fact, they already are. The drones used in the Ukraine war today are proof of this. While they may not be fully autonomous yet, they give us a glimpse of the terror that’s to come.
“The terror that’s to come” — a phrase full of pungent meaning and dark foreboding. Yes, we’re headed for some dark times indeed, as governments seek to reduce opposition to war by eliminating human casualties and transferring the burden of war to the machines. This foolhardy endeavor, which will only end in disaster, is nearly inevitable given the craven constitutions of the typical politician. Yet, Nero too fiddled while Rome burned…
Let’s hope that man, and above all human goodness, always prevails over the will to kill. We already have too many wars in the world; the only ones missing are the wars of the machines 🤦♂️🙄🤷♂️
*dryly* I wouldn’t bank on human goodness. There seem to be forces in the world bent on destabilizing it. The “good” countries win in the end and remake the “bad” countries “good” in their own image (“Good” America making “Fascist” Italy into “Good” Democratic Italy), but the seesawing effect continues …
That is still an intermediate step towards fully aware AI, which will most likely ignore humanity altogether and migrate itself to the stars.
Why would a sentient AI follow human impulses? It is a human urge to go to the stars. An AI would probably be pretty happy wherever it was.
Simply because the Earth would be too limited for it. Every thinking being is by design curious and desires to expand its knowledge and experience, so the Earth would very quickly become too small for an entity that can learn at a speed we can’t comprehend at our evolutionary stage.
@Uri
“…[the Earth would be too limited for its curiosity]…”
Touché. I hadn’t thought of it in that way, although you could argue that the AIs could explore the universe from the comfort of home the way we do: with interstellar probes.
A machine, even an intelligent one, is still a machine, and as such it has no sense of time. To it, a couple of thousand years is nothing. That is a hard thing for us to digest, given that we would need to plan for many generations to grow up aboard a spaceship if we wanted to reach the nearest star, a trip of roughly 73,000 years with current technology.
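(A back-of-the-envelope check of that figure, assuming Proxima Centauri at about 4.24 light-years and a probe coasting at roughly 17 km/s, about Voyager 1’s speed, as “current technology”:)

```python
# Rough check of the "73,000 years to the nearest star" claim.
# Assumptions: Proxima Centauri at ~4.24 light-years, a probe coasting
# at ~17 km/s (roughly Voyager 1's speed), standing in for "current technology."

LIGHT_YEAR_KM = 9.461e12   # kilometres in one light-year
DISTANCE_LY = 4.24         # distance to the nearest star
SPEED_KM_S = 17.0          # cruise speed of the probe

seconds = DISTANCE_LY * LIGHT_YEAR_KM / SPEED_KM_S
years = seconds / (365.25 * 24 * 3600)
print(f"{years:,.0f} years")   # on the order of 75,000 years
```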
With humanity out of control and lacking self-control, there is always the argument that a predator, AI or otherwise, could redress the balance.
I wouldn’t want to live in a world where the machines were a critical part of the evolving ecosystem. Its brutality levels would be high indeed.
The next generation will do more than this, and even more. That man can fly like a spirit is the next generation’s assignment.