2010-12-04

WiFi on Lenovo Ideapad B450 with Fedora 14

I recently installed Fedora 14 on a Lenovo Ideapad B450 for PDRC and had problems getting the Wi-Fi to work. I am posting this so search engines can find it.

The Lenovo Ideapad B450 has an Atheros Gigabit Ethernet card and an Atheros AR5001 Wi-Fi card. I had installed Fedora 12 on it previously and it worked with no problems. After upgrading to Fedora 14, the wireless card stopped working. NetworkManager in GNOME showed "wireless disabled" and no amount of clicking would enable Wi-Fi. After some digging and trial and error, I blacklisted the acer-wmi kernel module and Wi-Fi was working again.

  1. Go to /etc/modprobe.d/
  2. Edit blacklist.conf and add the following line: blacklist acer-wmi
  3. Reboot!
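
After the edit, /etc/modprobe.d/blacklist.conf should end with the line below (everything else in the file stays as it was):

    # Prevents the acer-wmi module from loading; it disables the
    # wireless radio on this laptop under Fedora 14.
    blacklist acer-wmi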

2010-10-04

Deaf denied exit by Bureau of Immigration

UPDATE:
As of 8 a.m., October 7, 2010, the Bureau of Immigration, under the directive of Under Sec. Jovy Salazar of the Department of Justice, has placed Mr. Franklin Corpuz on a Qatar-bound flight this morning. This is to rectify the situation and the treatment Mr. Corpuz received on 4 October 2010. This was made possible through the appeal of Prof. Raul Pangalangan of the U.P. College of Law. A full investigation is ongoing. Mr. Corpuz intends to file formal legal action against the Immigration Officer. He STILL needs your support for this.

Please send LETTERS OF PROTEST to:

  1. snail mail: Atty. Ronaldo Ledesma
    OIC
    Bureau of Immigration
    Magallanes Drive, Intramuros
  2. email (please CC me: ed.cabalfin@gmail.com)
    Dept. Of Justice Action Center
    Attn: Romeo D. Galvez
    dojac@doj.gov.ph
    soj@doj.gov.ph
  3. by fax: Tel. 521-1614

This is not an official press release. I just want to get this out so more people will know. I'll update this post as soon as I get more information. This is from an email sent by Dr. Martinez of the Philippine Deaf Resource Center (PDRC), edited to include links and notes.

Summary
Name of passenger: Franklin Galano Corpuz
Disability: Deaf
Flight: Bound for Qatar via Qatar Airways QR 645, October 4, 2010; 6:35 a.m.

Documents presented at Immigration (to be faxed separately):

  1. Affidavit of Support from Al Mana Interiors
  2. Detailed Invitation to take part in Business Training
  3. Visa paper from Ministry of Interior, State of Qatar
  4. Passport with Qatar visa
  5. Airplane ticket

Purpose of trip:
Invited by the company for Business Training and orientation on a program for Persons With Disabilities, with eventual employment. The program specifically recruits PWDs; two other Filipinos have been placed in Qatar, one Deaf, the other mobility-challenged. (Link)

Highlights of incident:
F. Corpuz arrived at NAIA early on 4 October 2010 and checked in. He was accompanied by a Qatar Airways flight attendant to the Immigration counter. Upon being told by the flight attendant that Mr. Corpuz is Deaf, the Immigration counter officer directed him to the Bureau of Immigration office. He was interviewed there by a male Immigration officer who said that he would be disallowed because he is “deaf and CANNOT SPEAK”. He was then asked to leave the airport.

[PDRC] placed a call to the NAIA Immigration Office this morning and spoke with Ms. Gladys Castillo and Mr. Jeff Ignacio (administrative staff). They said that, as a matter of policy, they are not allowed to release the name of the Immigration Officer who interviewed Mr. Corpuz. Ms. Castillo said to send any letters of complaint to Atty. Ronaldo Ledesma, OIC of the Bureau of Immigration. Mr. Ignacio, on the other hand, said that in their Indicator Checklist for Offloading, the remarks were as follows: “No sufficient proof that his trip is for business to Qatar considering that he is deaf and mute.” He said it was not specified what proof or documentation was lacking.

2010-09-22

Clarification and Correction

I was recently interviewed by loQal about my research. You can read the article here. I guess I didn't explain some things clearly enough, and I'd hate to give a false impression of the research. Here are some items I'd like to clarify or correct.

  • The Filipino Sign Language (FSL) Archive project is a collaboration between:
    1. Philippine Deaf Resource Center (PDRC) - an NGO
    2. Philippine Federation of the Deaf (PFD) - an NGO
    3. Digital Signal Processing (DSP) Lab of the Electrical and Electronics Engineering Institute
    4. Computer Vision and Machine Intelligence Group (CVMIG) of the Department of Computer Science

    DSP and CVMIG are both of the College of Engineering, UP Diliman.

  • The FSL Archive Project is a separate project from the Filipino Speech Corpus (FSC) project. For one thing, Sign is not Speech.
  • As far as I know, the linguistics research is being done by PDRC and PFD, not UP.
  • I don't have an application or system yet that can convert FSL into text. That is a long way off. What I have are experimental programs, nothing practical. Also, the syntax and semantics of FSL are currently poorly understood. Until we get a better handle on that, FSL-to-text sentences are not possible.
  • FSL vs ASL (vs SEE vs MCE). It cannot be denied that American Sign Language (ASL), Manually Coded English (MCE) and Signing Exact English (SEE) have a huge influence on FSL; however, many Filipino Deaf refer to their language as Filipino Sign Language. This is a social, cultural and political issue in addition to a technical one. For example, the Deaf I met in Cebu called their sign language Cebu Sign Language. And yes, there is a lot of variation between regions and provinces.
  • FSL vs English (vs Tagalog). This one confuses a lot of people. Sign is not Speech. FSL is not English. FSL is not Tagalog. FSL is a separate, distinct language. It helps if you think of Written English as a separate language from Spoken English. There is no equivalent "Written FSL". To facilitate research, signs are assigned a label called a GLOSS. It is a word or phrase borrowed from another language. Since many Deaf in the Philippines have Written English as a second language, the GLOSS is borrowed from Written English. It is often written in ALL CAPS to distinguish it from Written English (example: THINK-SKIP-MIND). Note that while the GLOSS is chosen to be as close to the meaning of the sign as possible, this is not a translation. This is one reason why you sometimes see Tagalog used as a GLOSS (example: LOLA).

I think that covers most of it. If you have more questions, leave a comment. Thanks for reading!

2010-09-21

Difficulties in Sign Recognition

What makes it hard to do Sign Recognition? I touched on this topic briefly in an earlier post. Simply put, there is a lot going on in sign language. In spoken languages, you just have to listen to one thing; in sign language, you pay attention to the face, the body, and the hands and arms simultaneously.

Another part of the problem is the complexity of sign language itself. Facial expressions and body posture are part of the language. Some form of facial recognition and expression detection is needed (although I ignore this in my research; a topic for another post). The signs themselves vary when used in a sentence, much like the sounds of words change slightly when spoken in different sentences and in different contexts.

Variation is another source of problems. Each individual performs a sign differently, similar to how different people sound different in spoken languages. Even for the same individual, the signs vary slightly when done at different times. On top of that, there are regional and local variations of the same sign. This is one reason why I restricted my research to signs used in Metro Manila; if I hadn't, I'd never finish.

The other source of difficulty is the general difficulty of computer vision. How do you tell the background from the foreground? How do you distinguish several people in one image or video? Humans have an incredible ability to figure out faces and postures even when viewing from the side; how do we duplicate this ability in computers? To reduce these issues, I recorded one person signing, wearing a plain black shirt, in front of a plain black background.
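
As a toy illustration of why that last setup helps (a minimal sketch in Python with NumPy; the threshold value and array sizes are made up, not from my actual system): with a dark, uniform background and dark clothing, separating the signer's face and hands from everything else can be as crude as thresholding pixel intensity.

    import numpy as np

    # A grayscale frame as a 2-D array of intensities in [0, 255].
    # A random stand-in for a real 160 x 120 video frame.
    frame = np.random.randint(0, 256, size=(160, 120), dtype=np.uint8)

    # With a plain black background and a black shirt, the bright pixels
    # are mostly the signer's face and hands. A fixed threshold gives a
    # crude foreground mask; real footage would need a tuned value.
    THRESHOLD = 60
    foreground_mask = frame > THRESHOLD

    print(foreground_mask.mean())  # fraction of pixels labeled foreground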

2010-09-09

Bibliography

This is a partial dump of the references I have used so far.

  • Rafaelito M. Abat and Liza B. Martinez. The history of sign language in the Philippines: Piecing together the puzzle. In 9th Philippine Linguistics Congress, Diliman, Quezon City, Philippines, 2006.
  • Julius Andrada and Raphael Domingo. Key findings for language planning from the National Sign Language Committee (status report on the use of sign language in the Philippines). In 9th Philippine Linguistics Congress, Diliman, Quezon City, Philippines, 2006.
  • Yvette S. Apurado and Rommel L. Agravante. The phonology and regional variation of Filipino Sign Language: Considerations for language policy. In 9th Philippine Linguistics Congress, Diliman, Quezon City, Philippines, 2006.
  • Robin Battison. Lexical Borrowing in American Sign Language. Linstok Press, Silver Spring, MD, 1978.
  • Marie Therese A.P. Bustos and Rowella B. Tanjusay. Filipino Sign Language in Deaf education: Deaf and hearing perspectives. In 9th Philippine Linguistics Congress, Diliman, Quezon City, Philippines, 2006.
  • Phil. Deaf Resource Center and Phil. Federation of the Deaf. Part 1: Understanding Structure. An Introduction to Filipino Sign Language. Phil. Deaf Resource Center, 2004.
  • Phil. Deaf Resource Center and Phil. Federation of the Deaf. Part 2: Traditional and Emerging Signs. An Introduction to Filipino Sign Language. Phil. Deaf Resource Center, 2004.
  • Heeyoul Choi, Brandon Paulson, and Tracy Hammond. Gesture recognition based on manifold learning. Structural, Syntactic, and Statistical Pattern Recognition, 5342:247–256, December 2008.
  • Philippe Dreuw, Carol Neidle, Vassilis Athitsos, Stan Sclaroff, and Hermann Ney. Benchmark databases for video-based automatic sign language recognition. In International Conference on Language Resources and Evaluation, Marrakech, Morocco, May 2008. http://www-i6.informatik.rwth-aachen.de/~dreuw/database.php.
  • Raymond G. Gordon Jr., editor. Ethnologue: Languages of the World, 15th ed. SIL International, Dallas, Texas, 2005. http://www.ethnologue.com/.
  • Sushmita Mitra and Tinku Acharya. Gesture recognition: A survey. IEEE Trans. Systems, Man & Cybernetics, 37(3):311–323, May 2007.
  • Phil. National Statistics Office. Persons with disability comprised 1.23 percent of the total population. Special Release No. 150, March 2005. http://www.census.gov.ph/data/sectordata/sr05150tx.html.
  • Sylvie C.W. Ong and Surendra Ranganath. Automatic sign language analysis: A survey and the future beyond lexical meaning. IEEE Trans. Pattern Analysis & Machine Intelligence, 27(6):873–891, June 2005.
  • World Health Organization. Deafness and hearing impairment. Fact Sheet No. 300, March 2006. http://www.who.int/mediacentre/factsheets/fs300/en/index.html.
  • Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323–2326, December 2000.
  • Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, December 2000. http://waldron.stanford.edu/~isomap/.
  • Christian Philipp Vogler. American Sign Language Recognition: Reducing the Complexity of the Task with Phoneme-Based Modeling and Parallel Hidden Markov Models. PhD thesis, University of Pennsylvania, 2003.

2010-08-04

Manifold Learning - part 2

Dimensionality Reduction

Trying to make sense of 19,200 dimensions is asking for trouble. Fortunately for us poor humans, most data are constrained in some way. For example, the data varies, but it doesn't vary along all 19,200 dimensions at the same time; it varies along some of the dimensions, some of the time. If we know how and when the data changes, we can approximate our data with a smaller set of dimensions. This is known as dimensionality reduction.

An Illustrated Example

Let's take a two-dimensional example. Let's say the data we have collected come in pairs, and when we plot them it looks like this:

Unfortunately, the analytical tools that we have only work in one-dimension. We need to reduce the number of dimensions before we can analyse it. Fortunately for us, it seems the data we have (almost) fall along a straight line.

Let's rotate our plot such that the line becomes the new X-axis. It's still the same data; we just changed the way we look at it. Notice that the data (blue squares) are very close to the new axis (red line).

If the variation along the new Y-axis is much, much smaller than the variation along the new X-axis, we can approximate our data by its projection onto the new X-axis. We can pretend that the projections (red circles) are our data (blue squares) if our data is very, very close to the new X-axis (red line)*.

We can now use the projections in our tools because they have only one dimension. We have reduced the number of dimensions of our data from two to one. Yes, errors will be introduced since the projections are not the same as our data. But as long as the variation along one (new) axis is much, much larger than along the other, the error will be small.

Principal Component Analysis (PCA) is one method that does exactly this, and it is applicable in many problem domains.
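
For the curious, here is a minimal sketch of the rotate-and-project idea in Python with NumPy (the toy data and variable names are mine, not from the FSL project):

    import numpy as np

    # Toy 2-D data that (almost) falls along a straight line.
    rng = np.random.default_rng(0)
    x = np.linspace(0, 10, 50)
    data = np.column_stack([x, 0.5 * x + rng.normal(0, 0.1, size=50)])

    # PCA via SVD: center the data, then find the new axes.
    centered = data - data.mean(axis=0)
    _, singular_values, vt = np.linalg.svd(centered, full_matrices=False)

    # The rows of vt are the new axes, ordered by variance. Projecting
    # onto the first row reduces the data from two dimensions to one.
    projection = centered @ vt[0]

    # If the first singular value is much larger than the second, the
    # one-dimensional approximation is good.
    print(singular_values)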

* Let's ignore what we mean by "very, very close" for now.

2010-08-01

Manifold Learning - part 1

Background: How many dimensions?

When we talk of dimensions in casual conversation, we often recall high school geometry. A point has zero dimensions, a line segment has one dimension (length), a rectangle has two dimensions (length & width), and a block has three dimensions (length, width & height).

We can also think of dimensions as a tuple, a set of numbers that describes something. For example, you can think of color in terms of Red, Green, and Blue components. We can say color has three dimensions (R, G, and B). The same color can be represented with a different set of numbers: Cyan, Magenta, Yellow, and blacK. This time, color has four dimensions (C, M, Y, and K). If we are consistent with our set of numbers, we can describe many things. eHarmony supposedly has 29 dimensions to describe each person. It simply means they use 29 numbers to describe a person, whatever those numbers are supposed to measure.
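
To make the tuple idea concrete, here is a trivial sketch in Python (the RGB-to-CMYK conversion is the naive textbook one, ignoring color profiles):

    # A color as a 3-tuple: three dimensions (R, G, B), each in [0, 1].
    rgb = (0.2, 0.6, 0.4)

    # The same color as a 4-tuple: four dimensions (C, M, Y, K).
    # Naive conversion; assumes the color is not pure black (k < 1).
    k = 1 - max(rgb)
    c, m, y = ((1 - channel - k) / (1 - k) for channel in rgb)
    print((c, m, y, k))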

Now, analysing three dimensions is straightforward. We can turn them into graphs and plots, and they are easy to visualize. Four dimensions is a little harder but doable (look up color solid or color space sometime). But 29 dimensions? How about 19,200 dimensions? We need help for those.

2010-07-26

Why sign language?

I find sign language fascinating.

  • It's a language unlike any other language I know.
  • It uses a different set of body parts to communicate.
  • Sign language linguistics is hard.
  • Sign language recognition is hard(er).

And it surprises me.

For the longest time, I thought sign language used only the hands and arms. Only when I started reading about it and interacting with the Deaf did it become clear that it uses the whole body. The face and body posture are important in discourse. That's the exciting part: we (I) don't know how important they are, yet. Sign language linguistics is in its infancy compared to spoken and written linguistics.

FSL also opened my eyes to the plight of the Deaf. The most valuable lesson I have learned is this:

The most important thing about sign language is the Deaf person using it.

2010-03-12

Video to Data

or How Do We Represent Video as an Input to Our Various Algorithms?

What is video anyway? To oversimplify, video is a series of images shown one after another. There are many display and storage formats but, in essence, they are all just a set of images. How fast, or how often, the images are presented is measured in frames per second (fps). For example, television is usually shown at 30 fps. Each frame is one image or picture; so, television shows us 30 images* per second, one after another. You can think of video as time-series data.

Let's turn our attention to each frame (image) in the video. How many numbers do we need to represent an image? It depends on the size of the image. For illustration purposes, let's assume that our image is 160 pixels high and 120 pixels wide**. That means we have 19,200 pixels (160 x 120) to represent the image. If we have a color image or video, each pixel carries color information -- what colors are present at that particular pixel of the image. Depending on how color was encoded, we could have 3 or 4 numbers per pixel. If we have a grayscale image, each pixel will only have intensity information -- how dark or bright that particular pixel is. Thus, a grayscale image needs 19,200 numbers to represent it. If we treat video as time-series data, each data point will have 19,200 numbers associated with it. And that is exactly how the FSL recognition system we implemented treats video data. We can do this because all the images (frames) in a video have the same size.
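
Here is a minimal sketch of that representation in Python with NumPy (the shapes follow the 160 x 120 example above; the random array is a stand-in for real frames):

    import numpy as np

    # A grayscale video as a stack of frames: T frames of 160 x 120 pixels.
    num_frames, height, width = 100, 160, 120
    video = np.random.rand(num_frames, height, width)  # stand-in for real frames

    # Flatten each frame into a single vector of 19,200 intensities.
    # The video becomes time-series data: one 19,200-dimensional
    # point per frame.
    points = video.reshape(num_frames, height * width)
    print(points.shape)  # (100, 19200)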

* For TV, it's actually 60 half-images (interlaced fields) per second. To keep the discussion simple, I'm ignoring that.

** It doesn't matter what storage format was used in the original video; at some point, it will have to be displayed on the screen, which has pixels.

2010-02-27

Intelligence, Consciousness, and Life

Warning: This post is just me rambling.

Whenever I talk about my research, the topic sometimes meanders to Artificial Intelligence. Specifically, two questions come up: (1) are these machines really thinking? and (2) will machines become as intelligent as people? It seems that people conflate three things in these discussions: Intelligence, Consciousness, and Life.

Because we humans are Intelligent, Conscious, and Alive, we often think of them as one property -- being Human. So when we say something is Intelligent, we automatically assume we are also saying that it is Conscious and Alive. Thus, when we talk about Artificial Intelligence, it is sometimes confused with Artificial Consciousness and Artificial Life. And this is on top of the usual confusion surrounding Intelligence, Consciousness, and Life themselves.