justanotherfan wrote: Awesome, yeah. Wiki says that's doctor.pl, an Emacs implementation of Eliza from 1985.
Just as an interesting nitpick, that's "doctor.el"--.pl is the standard extension for Perl scripts, while .el stands for Emacs Lisp. Much of Emacs' success comes from the fact that it is a platform and interpreter for a slightly modified version of Lisp, a language which, coincidentally, is commonly associated with artificial intelligence.
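For a sense of how little machinery doctor.el's ancestor actually needs, here's a minimal sketch of the Eliza trick in Python--the patterns and responses are my own invented examples, not taken from doctor.el: match a keyword pattern, reflect the pronouns, and fill in a canned template.

```python
import random
import re

# Pronoun reflection: "my" in the input becomes "your" in the reply, etc.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Keyword patterns mapped to response templates; {0} is the captured group.
# These rules are invented for illustration, not lifted from any real script.
RULES = [
    (re.compile(r"i need (.+)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"i am (.+)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"(.*)", re.I),
     ["Please, go on.", "Tell me more about that."]),
]

def reflect(fragment: str) -> str:
    """Swap first- and second-person words so the echo reads naturally."""
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(user_input: str) -> str:
    """Return the first matching rule's template, filled with the reflected capture."""
    for pattern, templates in RULES:
        match = pattern.match(user_input.strip())
        if match:
            return random.choice(templates).format(*(reflect(g) for g in match.groups()))
    return "Please, go on."  # unreachable thanks to the catch-all rule

if __name__ == "__main__":
    print(respond("I need a vacation"))  # e.g. "Why do you need a vacation?"
```

That's the whole act: no understanding, just reflection and a lookup table, which is rather the point of the strong-AI discussion below.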
I think an instant messaging program, or at least an IRC program, had Eliza chatbots implemented.
I'm pretty sure that all popular chat protocols have had the old AI packages piped into them at some point, but ICQ was the most famous for that sort of thing. As I recall, I made myself an account about 8 or 9 years ago so that I could stay in touch with Phasmatis, and within an hour of logging on, I was barraged with solicitations from a horde of automated bots. Apparently due to the high volume of competition, many of these bots forwent the normal spam modus operandi in favour of attempting to pose as an Internet Female in order to engage awkward nerdlings in brief patter before suggesting the user click a link to "her" "webcam". Of course, the link usually led to something virus- or spyware-laden, so I guess it was something like a virtual STD.
In my computing textbook, it seemed like the latest thing in AI was compiling a comprehensive database about the world, in the hopes that comprehension and consciousness would miraculously spring forth from the seed knowledge. Clever, but something still seems to be missing. I'll be intrigued to see how Project Hyperion will revolutionize the field. Once Jonas creates a computer consciousness, it could evolve quickly enough to complete the rest of the game development.
Not surprisingly, your textbook appears to be quite out-of-date or misinformed. There is quite a bit of misinformation about current AI research and practices floating around, partly due to how rapidly and drastically the field has changed, and partly due to the undying appeal of the AI presented in popular media. The AI which exists as an entity in pop culture is congruent with what the field's founding fathers called "strong AI": that is, a computer system which believably emulates a typical human in its interactions with a human user. When computer technology was new and exploding with innovations at an incredible rate, this seemed like the inevitable destination--a bit like the student becoming the master, I suppose.
{In the interests of clarity, I will be using the popular (yet incorrect) term "computer" to refer to a computer program or system cast in the role of an artificial agent.}
Of course, what was not even considered was that a computer surpassing the raw intelligence of a human still acts like a computer. Arguably, the defining characteristic of the human mind is irrationality: the unique trait that causes one to think or do things which one's current knowledge does not support (or indicates to be incorrect). While it is definitely not impossible to make a computer irrational, it is another thing entirely to make a computer irrational while ensuring that its irrational actions are consistent with those of another irrational being. In order to do that, not only would the computer need to be rational (making genuine spontaneity effectively infeasible), but it would need to deconstruct human irrationality rationally in order to reproduce it in a way that's rationally irrational (phew). If you want to have dry, user-driven dialogue with a program, that's fine, but that's Natural Language Processing, not strong AI. If you want to have a conversation in which you forget you're talking to a computer, you're going to have to wait until we've figured out the mechanics of the human brain.
The reason this stuck in pop culture is that when the field of artificial intelligence was conceived (with strong AI as its sole goal), it seemed like the pinnacle achievement of mankind (not to mention exciting in fiction), and thus there was a great deal of excitement and hype surrounding its research. When year after year produced no significant results, funding was pulled and even Minsky gave up, and normal people went back to "AI as science fiction" mode--which they were pretty much already in.
Of course, AI was revitalized a short time later, but this time with a largely different goal. Instead of trying to act like a human, weak AIs are systems that have the ability to discover information without the run-time assistance of a human. Certainly, there are still a few groups intent on achieving human emulation, but the kind of AI in video games today has less to do with thinking like a human and more to do with finding the right set of pre-scripted actions to take, paths to follow, and so forth, based on their environment. Many of the popular weak AI algorithms rely on pseudo-random elements in order to provide a starting "guess" solution, introduce variety, or simulate real-world entropy, but some of them use straightforward procedural heuristics, like the famous A-Star (or A*) pathfinding algorithm, sketched below.
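To make that concrete, here's a minimal A* sketch over a 4-connected grid in Python--the toy map, start, and goal are invented for illustration. It repeatedly expands whichever node has the lowest cost-so-far plus a Manhattan-distance estimate of the cost remaining, which is exactly the kind of procedural heuristic a game might use for unit movement.

```python
import heapq

def astar(grid, start, goal):
    """A* over a 4-connected grid; cells holding 1 are walls.

    Returns the path from start to goal as a list of (row, col) tuples,
    or None if the goal is unreachable.
    """
    def h(cell):
        # Manhattan distance: an admissible estimate of the remaining cost.
        return abs(cell[0] - goal[0]) + abs(cell[1] - goal[1])

    open_heap = [(h(start), 0, start)]   # entries are (f = g + h, g, cell)
    came_from = {}
    best_g = {start: 0}

    while open_heap:
        f, g, cell = heapq.heappop(open_heap)
        if cell == goal:
            # Walk the parent links backwards to recover the path.
            path = [cell]
            while cell in came_from:
                cell = came_from[cell]
                path.append(cell)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = cell[0] + dr, cell[1] + dc
            if 0 <= nr < len(grid) and 0 <= nc < len(grid[0]) and grid[nr][nc] == 0:
                ng = g + 1  # every step costs 1 on this grid
                if ng < best_g.get((nr, nc), float("inf")):
                    best_g[(nr, nc)] = ng
                    came_from[(nr, nc)] = cell
                    heapq.heappush(open_heap, (ng + h((nr, nc)), ng, (nr, nc)))
    return None  # no route to the goal

if __name__ == "__main__":
    # 0 = open floor, 1 = wall; a toy map invented for this example.
    grid = [
        [0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0],
    ]
    print(astar(grid, (0, 0), (3, 3)))
```

Note that the only data structure doing any real work there is the priority queue; everything "intelligent" about the behaviour falls out of the heuristic, with not a thought in its head.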
In truth, you probably interact with a wide variety of weak AI every day, in places like Google Ads' selection algorithm, many of the standard features of WordPress and MediaWiki, and even the embedded systems of household appliances and other devices. While that is quite a bit more practical than having an expensive box that can emulate something that already exists, it's not nearly as exciting or perverse, so, much like we do with Freud's contributions to science, we let other people think about the facts while we muse about more entertaining misinterpretations.
The above was an excerpt from my book OiNutter Explains Artificial Intelligence, part of the OiNutter Explains Shit collection. If you enjoyed the above passage, or at the very least do not feel compelled to liken it to having nails driven into your eyes, make sure you check out the other books in the series, such as OiNutter Explains Computers Vol. 39, OiNutter Explains Freud's Fornication Frustration, and the best-seller OiNutter Explains Consciousness.
Simpletext > Pico > Emacs
I'm not sure where you're coming from with that. While Emacs can indeed be called a text editor, I don't see how it's directly comparable to other text editors, since Emacs is, in essence, a unified workstation interface and platform. Anyone who spends less than 25% of their computing time in a CLI shell or terminal would very likely have no use for it, but for someone like me, who eats code, defecates binary, and sleeps on a stack, it's quite an essential workspace environment. Like Vi, Emacs allows you to execute all commands without your hands ever leaving the home keys, but it can also split the screen an arbitrary number of times, run a shell in one (or more) of the resulting windows, edit binary files without corrupting the data, have the same file open at different places in an arbitrary number of windows, provide language-specific syntax highlighting/colouring/font-lock, turn menial tasks into macros, offer modes like `hexl`, which turns a window into a hex editor, etc., etc., etc.
Not trying to start a nerd-religion war here, I'm just wondering how Emacs ended up as an operand to a comparison operator when I can't even think of another program that occupies a comparable role...