Why Speech-enable Emacs?
From: [email protected] (Bryan Smart)
To: raman
Subject: Emacspeak
Date: Thu, 27 Apr 95 21:39:07 EDT
I've been trying for a while to get enough people working on a speech interface for
Linux at the same time, and everyone always gets pulled away by other
projects, school, etc., and drops out.
We were going to write a patch for the TTY driver so the local console
would be interactive and the blind user could make use of all of the VTs,
etc. Your system just speaks Emacs? I just never got into Emacs.
The only Emacs I've used for any length of time is Micro-Emacs. Actually, as
far as editors go, I usually use Pico. I know Emacs has a wonderful
scripting language, blah blah blah, but why didn't you put together a speech
interface for Linux? Everything would talk to some extent then.
If there is a good reason, then I'd be glad to hear it. From reading your
post, I estimate you know what you're talking about. Yet the interface is
strictly for Emacs. Hmmmm.
Bryan Smart
Email: [email protected]
WWW: http://www.cris.com/~bsmart
From: raman
To: [email protected] (Bryan Smart)
Subject: Emacspeak
Date: Thu, 27 Apr 1995 22:24:54 -0400
Cc: raman
Hi Bryan,
I'm writing a detailed reply because you sound like you've thought about the
problem of access to Linux; hopefully you'll be enthused enough by Emacspeak
to make it even better.
I had a very good reason for writing emacspeak on top of emacs.
1) I started this off as something that I would get working in a couple of
weeks, which I did.
(this was last October)
2) Emacs, as you allude to (this is full GNU Emacs in all its garbage-collecting
glory), is extremely powerful and not to be compared with Micro-Emacs.
3) From emacs I can do everything, including run a subshell etc.
4) With the forthcoming eterm (a terminal emulator under Emacs), which Emacspeak
already works with, you basically get everything you would get if you wrote
general-purpose speech output at the Linux TTY driver level, and more.
1) How you get the same:
With the eterm terminal emulator under Emacs, I can run vi, rn, and a host of
other shell programs, including telnet and kermit sessions when
logging into other machines. And everything talks.
2) How you get more:
I'll preach a little here, apologies in advance.
When you take a TTY driver and make it speak (this is essentially what all PC
screen readers under DOS do), all you get to hear is the contents of the
display; you're responsible for figuring out why it's there.
So, for instance, when a calendar application lays out the calendar to produce
a well-formatted tabular display, it looks nice; but the blind user hears
"1 2 3 4 5 6 7 2 3 4 5 6"... or some such garbage. Believe me, I've used
such an interface for the last five years.
So now you've got to figure out that, for instance, April 27 is a Thursday by
checking which screen column the figure "27" appears in.
Emacspeak takes a completely different approach to speech-enabling Emacs
applications (which, as you know, are numerous). Emacspeak looks at the
program environment and data of the applications, and speaks the information
the way it should be spoken. So in the case of the calendar, you hear
"Thursday, April 27, 1995".
So in summary:
1) Emacspeak does much better at providing speech output for applications
written for Emacs, e.g. the emacs calendar, gnus, W3 ...
2) In the case of applications running at the shell level, i.e. non-Emacs apps,
Emacspeak provides the same level of speech output as DOS-based
screen readers or a TTY-based screen reader would.
All this said, there is one small shortcoming: you don't get speech until Emacs
has started.
Here is how I have things set up on my laptop; if you have suggestions, I'd
welcome them.
At present, I have set up LILO to beep once when it prompts for DOS or Linux;
if I don't touch the keyboard, it boots Linux and gives me a double beep
(this tells me I have a login prompt).
When I login, my .profile speaks a welcome message by sending a string to the
speech device.
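Such a .profile fragment might look like the sketch below. This is a guess at the shape of the setup, not the original file; the device path /dev/dectalk and the SPEECH_DEV variable are assumptions, so substitute whatever device file your synthesizer's driver actually provides.

```shell
# Sketch of a .profile welcome message (not the original file).
# SPEECH_DEV and /dev/dectalk are assumptions; substitute the
# device file your speech synthesizer actually uses.
SPEECH_DEV=${SPEECH_DEV:-/dev/dectalk}
if [ -w "$SPEECH_DEV" ]; then
    # Anything written to the device is spoken by the synthesizer.
    echo "Welcome to Linux" > "$SPEECH_DEV"
fi
```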
I also set up my bash PROMPT_COMMAND so it speaks something after each command
has executed successfully. For times when things go wrong (as they did when I
was building and testing Emacspeak), I also have a shell script, "speak", that
sends its arguments to the speech device.
So, for instance, speak `pwd` tells me the working directory, etc.
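A "speak" helper along those lines takes only a couple of lines of shell. The sketch below is a guess at the shape of the script, not the original, and the device path /dev/dectalk is again an assumption:

```shell
# speak -- send its arguments to the speech synthesizer's device file.
# A sketch of the helper described above, written as a shell function;
# /dev/dectalk is an assumed device path, overridable via SPEECH_DEV.
speak() {
    echo "$*" > "${SPEECH_DEV:-/dev/dectalk}"
}
```

Installed as a function or as a tiny script on the PATH, speak `pwd` then speaks the current directory; in the same spirit, bash's PROMPT_COMMAND hook (which bash runs before printing each prompt) could be set to something like PROMPT_COMMAND='speak done' to announce each completed command.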
So even if Emacs does not start up successfully, I get some feedback and have
some hope of figuring out what the machine is up to.
(From the above you'll realize that I am completely dependent on speech
output.)
Finally once emacs is up, I have full control of the machine.
Before I sign off, could you tell me what your interest in speech interfaces
comes from?
Thanks,
--Raman
Last modified: Thu Jun 29 08:58:14 1995