
Jon Cass

1. Kyoung-jae Kim, Ingoo Han. "Genetic Algorithm Approach to Feature Discretization in Artificial Neural Networks for the Prediction of Stock Price Index." Graduate School of Management, Korea Advanced Institute of Science and Technology

2. Why: I am very interested in Artificial Intelligence programs that can recognize patterns and act accordingly. I think that a program that could find patterns in the stock market would be both very useful and extraordinarily interesting.

3. Summary: This article has two main topics. It discusses predicting stocks in general and the algorithms used to pick the weights on the indices, and it discusses how the authors' implementation uses genetic algorithm techniques to "reduce the complexity in the feature space". The article begins with a list of previous work in Neural Network prediction of stocks, which is a topic that has been researched a fair amount. It then shows its specific genetic algorithm approach to choosing connection weights for the various predictive criteria that are employed in these models. Finally, it talks about what makes the authors' implementation unique: the use of a GA to make the search space smaller, in other words to reduce the dimension of the problem (a rough sketch of the idea follows below).
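The paper's actual chromosome encoding and fitness measure aren't given in this summary, so the sketch below is only a generic genetic algorithm over feature-discretization thresholds, with a dummy fitness function standing in for the trained network's prediction accuracy; the population sizes and threshold counts are invented for illustration.

{{{#!python
import random

# Hypothetical settings: 3 technical-indicator features, 4 thresholds each.
N_FEATURES, N_THRESHOLDS, POP_SIZE, GENERATIONS = 3, 4, 20, 50

def random_individual():
    # One sorted threshold vector per feature.
    return [sorted(random.uniform(-1, 1) for _ in range(N_THRESHOLDS))
            for _ in range(N_FEATURES)]

def discretize(value, thresholds):
    # Map a continuous feature value to a discrete bin index.
    return sum(value > t for t in thresholds)

def fitness(individual):
    # Stand-in for "prediction accuracy of the ANN trained on the
    # discretized features" -- here just a dummy score.
    return -sum(abs(t) for thresholds in individual for t in thresholds)

def crossover(a, b):
    # Pick each feature's threshold vector from one parent or the other.
    return [random.choice(pair) for pair in zip(a, b)]

def mutate(individual, rate=0.1):
    # Occasionally jitter a threshold, keeping each vector sorted.
    return [sorted(t + random.gauss(0, 0.1) if random.random() < rate else t
                   for t in thresholds)
            for thresholds in individual]

population = [random_individual() for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    population.sort(key=fitness, reverse=True)
    parents = population[:POP_SIZE // 2]          # truncation selection
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(POP_SIZE - len(parents))]
    population = parents + children

best = max(population, key=fitness)
print(discretize(0.3, best[0]))   # bin index for feature 0
}}}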

4. Purpose: The authors are attempting to predict the behavior of the stock market. There are, in general, two main reasons for doing something like this: one practical, and one theoretical. On the practical side, being able to predict the stock market would make a person very rich. From the theoretical standpoint, being able to write a program that could predict stock behavior would be a great achievement in pattern recognition and neural networks in general. The authors seem much more interested in the theoretical side.

5. Feedback: The article was pretty good on the whole. It was fairly easy to read, but assumed a good deal of background knowledge on the stock market and on the general approach to solving this type of problem. I would not be able to come close to reproducing their work after finishing the article, though I have more of an idea of where to start and what other resources might be valuable, in addition to knowing more about the problem of stock prediction in general.

6. Genealogy Project: No results on either author at the genealogy project. Googling reveals that both are active researchers in Asia. Ingoo Han was recently at the Pacific Rim International Conference on Artificial Intelligence. A Google search for Kyoung-jae Kim yields a page that says he has worked extensively with Ingoo Han and Kyong Joo Oh and Hyunchul Ahn - on evolutionary techniques, GA specifically, and often applied to financial problems.

Paul Mandel

1. KrishnaKumar, K, Karen Gundy-Burlet. "Intelligent Control Approaches for Aircraft Applications." NASA Ames Research Center

2. Why: I love robotics, and using AI to do more efficient control under a wider variety of inputs is extremely interesting to me. I worked in close proximity to several teams this summer who were using a control system that is very similar to this, so I got to learn a little bit about it.

3. Summary: This is a very basic article outlining the principles behind neural network adaptive control. It shows the basic system components and how they fit together. It also discusses the previous research that has been done on neural network adaptive control and how that research relates to the current work. One interesting part of the article covers actual ratings of how well neural network adaptive control works vs. non-adaptive neural networks.

4. Purpose: The specific work that the authors are doing is a very current and interesting problem. As planes get more and more complicated, designers want more and more to give the pilots an extremely simple interface that works intuitively (for pilots) and also works under as many plane-states as possible. For instance, with a correct neural network control system, a plane could get most of one wing blown off and the neural network would adapt and very quickly be able to correct for the change in plane-state and let the pilot keep flying the plane almost as normal.

5. Feedback: I found the article extremely easy to understand, to the point of being simplistic. I have a better article somewhere in my room on the same subject, but I didn't have time to find it before this assignment. This article spent most of its time explaining how the authors' work related to other similar work, instead of explaining how the authors' work actually functions and is constructed.

6. Genealogy Project: Neither author of this article was found in the database :-(. And googling doesn't reveal any information, either.

Nikolaus Wittenstein

1. Brooks, Rodney, Cynthia Breazeal, Matthew Marjanovic, Brian Scassellati, Matthew Williamson. "The Cog Project: Building a Humanoid Robot." Massachusetts Institute of Technology

2. Why: I went looking on Google Scholar for articles about AI in real-time computer games, because I'm interested in that sort of thing. I couldn't find any really good articles that actually talked in-depth about interesting stuff, but I went through a chain of references and ended up at this article and it was majorly interesting so I read the whole thing.

3. Summary: The main thing this article discussed is the way that people learn and act. It compared these methods with the current methods that people are trying to implement in their artificial intelligence projects, and then went on to discuss how the Cog team is trying to strike out in a different direction and make their robot move and learn in the way that humans do. There's a lot of discussion about how humans sense their environments and interact with objects, as well as how they use the world to their advantage. For example, there's no need to have an internal representation of the world to figure things out with if you've got the actual representation of the world right in front of you.

4. What the authors were doing: The authors of the article seemed to feel that if they could get a robot to learn in the way that humans do, then it would solve nearly all the major problems that AI has: babies come into this world not knowing how to do anything at all, and yet they have a framework in place that allows them to bootstrap (a term used frequently in the article) knowledge about the world and how to interact with it. If a robot could be made to actually learn in the way that humans do, then further programming in many areas would become obsolete, because theoretically the robot could learn far faster than one could program it.

5. Feedback: I totally enjoyed this article. I love reading about the way that brains work, and human-like AI is something that I've thought about many times. Almost all of the article was easily accessible, except for a few equations that governed the robot's arm motions. From the article nothing really can be replicated, because most of the things that are discussed are fairly broad and vague. That doesn't stop it from being fascinating, however. Most of the questions I had were answered by the end of the article, and considering the method that I used to find it in the first place, it definitely lives up to my expectations.

6. Authors:

  1. Rodney Brooks: PhD from Stanford in 1981; he seems to have been at MIT ever since.
  2. Cynthia Breazeal: Not in the AIGP. Studied at UCSB then went to MIT; been there ever since.
  3. Matthew Marjanovic: Not in the AIGP either. Seems to have attended MIT originally and just stayed there.
  4. Brian Scassellati: Not in AIGP. Works at Yale as a CS professor; used to work with Cog at MIT.
  5. Matthew Williamson: Not in AIGP. Attended Oxford, and apparently works for HP in the UK now. Weird.

EvanMorikawa

1. Roberts, Lawrence G. "Machine Perception Of Three-Dimensional Solids." Massachusetts Institute of Technology, 1963.

2. Why: I have always been fascinated with the way robots perceive the world. Of all of the available senses, it is vision that seems the most captivating to me; both for its potential as an extremely useful tool for a robot, and for the inherent complexity that is involved. The computational challenge of machine vision makes me think about and appreciate the power of my own mind and how sophisticated my eyes actually are.

3. Summary: The process of reconstructing a three-dimensional image from a two-dimensional picture is one that is heavily dependent on human knowledge, context, and subtle visual cues. Rules of depth perception, pattern gradients, and other techniques are implemented in this paper to obtain a three-dimensional rendering of a two-dimensional image. The process takes an image of a 3D shape and then edge-detects the lines around the object. The data is then analyzed using extrapolation methods and is stored in a “four-dimensional, homogeneous system of coordinates,” which contains position and translation data. From this virtual representation, a flat image can be rendered out from any perspective. This early paper on machine vision assumed that relatively simple 3D objects were the subject of the experiment. This meant that the algorithm could look for polygons and determine the transformations of the rudimentary shapes. In essence, a scene was broken up into simple polygons and then reconstructed again. Many of the transformations presented in this paper are based on mathematical expressions and are mostly theoretical.
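Roberts's exact matrices aren't reproduced in this review, but the homogeneous-coordinate machinery the paper relies on can be sketched with NumPy; the rotation angle, translation, and focal length below are made-up values, not numbers from the paper.

{{{#!python
import numpy as np

# A 3D point in homogeneous coordinates: (x, y, z, 1).
point = np.array([1.0, 2.0, 5.0, 1.0])

# One 4x4 matrix encodes both a rotation (here, 30 degrees about z)
# and a translation -- the kind of transform stored per object.
theta = np.radians(30)
transform = np.array([
    [np.cos(theta), -np.sin(theta), 0.0, 0.5],   # rotation plus an x-shift
    [np.sin(theta),  np.cos(theta), 0.0, 0.0],
    [0.0,            0.0,           1.0, 0.0],
    [0.0,            0.0,           0.0, 1.0],
])

moved = transform @ point

# Simple perspective projection onto an image plane:
# scale by a focal length f and divide x and y by depth.
f = 1.0
u, v = f * moved[0] / moved[2], f * moved[1] / moved[2]
print(u, v)   # 2D image coordinates rendered "from this perspective"
}}}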

4. Initial Problem: There has always been a need for a machine to be able to sense its surroundings. One of the most useful sensory inputs for humans, yet the most complicated, is vision. While the benefits of computer vision are fantastic, the challenges required to successfully extract the relevant data at reasonable speeds have proven unwieldy. Dr. Roberts in the early 1960s attempted through this paper to outline fundamental methods to extract 3D data from the flat images of cameras. At this time, the concepts of machine vision were very primitive and this is one of the original papers pertaining to analyzing images to produce virtual 3D representations of an object.

5. Feedback: This paper is a bit dated, yet still very informative. I felt that the paper was a bit on the long side, but it was extremely detailed in the mathematical constructs surrounding the process and included very useful images describing what the computer does. While this paper definitely outlines the most primitive stages of machine vision development, I would have trouble seeing how the techniques implemented here would apply to modern machine vision in a far more complicated world. The only pieces being analyzed were very defined geometric shapes in a very controlled environment. It would have been interesting to attempt the algorithm on a messier scene.

6. Genealogy: Dr. Lawrence Roberts got his BS, MA, and PhD from MIT in the early 1960s. Early in his career he was interested in machine vision, but he then moved on to developing communications protocols. More specifically, HE INVENTED THE INTERNET. Dr. Roberts, along with Leonard Kleinrock, Vinton Cerf, and Robert Kahn, is considered one of the forefathers of the internet due to their involvement with ARPA and groundbreaking work in packet switching and the eventual Internet protocol. See Dr. Roberts's homepage here: http://www.packet.cc/index.html

Ben Hayden

  1. Genetic-Programming.org, Wikipedia, Citeseer

  2. Why: Programs evolving programs to solve problems that I have no idea how to attack make me all wubbly inside.

  3. Summary:

    • John Koza wrote some cool Lisp programs that wrote other cool Lisp programs that mortals couldn't write. Whoo-ah.
  4. Initial Problem

    • I don't have his book, and I can't figure out which program came first. The primary applications of genetic programming are black magic fields such as antenna design and other hardware design.
  5. Feedback:

    • The Web design for gp.org is horrible, and it doesn't contain all the information I'd need to replicate it, but because of Wikipedia and other GP toolkits elsewhere on the web, I think that I could write my own "automated invention machine", and I'd like to do so for my final project.

  6. Koza is not in the AIGP, but gp.org says he got all of his degrees from the University of Michigan.

Zack Coburn

  1. "Approximating Game-Theoretic Optimal Strategies for Full-scale Poker"

  2. Why: I think poker is a fascinating game, and I think it's even more fascinating that it is as difficult as it is to make a computer play poker optimally or even well.

  3. Summary:

    • The authors produced an approximately optimal strategy for two-player Texas Hold'em poker by reducing the game space from 10^18 to 10^7 by making a series of approximations. They discussed the merits of such approximations as bucketing similar cards or hands together, changing the betting structure, and changing the number of cards (a rough sketch of the bucketing idea follows). They tested their approximately optimal poker player against other poker bots and metrics (such as "always call") and against human players of different levels (including one master, who ended up winning after 7000 games).
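The paper's actual bucketing metric isn't described in this summary, so the sketch below only illustrates the flavor of the abstraction: hole cards are mapped to a small number of "buckets" by a crude, stand-in strength estimate (the rank-only comparison here is invented for illustration, not the authors' evaluator).

{{{#!python
import random

RANKS = "23456789TJQKA"
SUITS = "cdhs"
DECK = [r + s for r in RANKS for s in SUITS]

def hand_strength(hole, trials=200):
    """Crude strength estimate: fraction of random opponent hole cards
    that this hand beats on high-card rank alone (a stand-in for a
    real hand evaluator)."""
    wins = 0
    rest = [c for c in DECK if c not in hole]
    for _ in range(trials):
        opp = random.sample(rest, 2)
        wins += max(RANKS.index(c[0]) for c in hole) > \
                max(RANKS.index(c[0]) for c in opp)
    return wins / trials

def bucket(hole, n_buckets=6):
    """Map a hole-card pair to one of n_buckets abstract 'hands'."""
    return min(int(hand_strength(hole) * n_buckets), n_buckets - 1)

print(bucket(["As", "Ah"]), bucket(["7c", "2d"]))   # strong vs. weak bucket
}}}

Once every hand is reduced to a bucket, the solver only has to reason about bucket-vs-bucket situations, which is where the huge reduction in the game space comes from.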

  4. Initial Problem

    • The authors saw other poker abstractions or approximations for which optimal or nearly-optimal solutions are known (such as preflop Hold'em and Rhode Island Hold'em). They wanted to find the most realistic abstractions for two-player Texas Hold'em and test a player based on these abstractions against humans and computers. The University of Alberta has a history of artificial intelligence research, particularly in the area of poker. This paper was written three years before the University of Alberta won the first AAAI Computer Poker Competition in 2006.
  5. Feedback:

    • The article makes sense, but it should show more of the actual implementation instead of just showing the results. It would be impossible to replicate the work without more information. Nonetheless, it provides a good vantage point for anyone who wishes to develop a poker playing program.
  6. Genealogy

    • The authors are from the Department of Computing Science at the University of Alberta. Billings is a Ph.D. student, Burch is a programmer analyst, Davidson graduated with an M.Sc. in 2002, Holte is a professor, Schaeffer is a professor, Schauenberg graduated with an M.Sc. in 2006, and Szafron is a professor.

Andy Kalcic

Article: Julia M. Taylor, Lawrence J. Mazlack (2004) "Computationally Recognizing Wordplay In Jokes," Cognitive Science Conference Proceedings (CogSci 2004), August, 2004, Chicago, 1315-1320

In my Wellesley class, Philosophy of Language, my professor expressed doubt that a formalization could be made describing relevance (in the context of generating implied meaning of utterances). I thought it would be fun to try to do so, specifically in trying to identify puns and generate the double meaning. So I googled for AI humor, and found a group in Cincinnati working on the problem.

This article basically lays out this group's approach to identifying and generating humorous wordplays in knock-knock jokes. They have a wordplay generator, which replaces characters in a given word with characters that produce similar sounds, filling a heap with many “words” that sound similar, ranked by how similar they are. Then it goes through each one, checking to see if it can be decomposed into a word or phrase, and if it can't parse an entry, it generates similar “words” from that entry. Once it finds a wordplay, it checks to see if that wordplay makes any sense in the context of the punchline, based on statistical information concerning the proximity of different words. The program correctly identified the wordplay in 85 of 130 knock-knock jokes, identified 62 of 66 non-jokes as non-jokes, and came up with punchlines containing wordplays for 110 of 130 joke setups.
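The paper's actual letter/phoneme similarity tables and costs aren't given in this review, so the following is only a toy sketch of the heap-based search it describes; the substitution table, the tiny dictionary, and the example word are invented for illustration.

{{{#!python
import heapq

# Toy sound-alike substitutions with costs; the real system uses a much
# richer table of similar-sounding letters and letter groups.
SIMILAR = {"d": [("t", 1)], "k": [("c", 1)], "f": [("ph", 2)]}
DICTIONARY = {"water", "cat", "who"}

def wordplays(word, max_cost=3):
    """Yield (cost, candidate) strings that sound like `word`, cheapest
    first, keeping only candidates that parse as dictionary words."""
    heap = [(0, word)]
    seen = set()
    while heap:
        cost, cand = heapq.heappop(heap)
        if cand in seen or cost > max_cost:
            continue
        seen.add(cand)
        if cand != word and cand in DICTIONARY:
            yield cost, cand
        # Generate further sound-alike "words" from this entry.
        for i, ch in enumerate(cand):
            for sub, c in SIMILAR.get(ch, []):
                heapq.heappush(heap, (cost + c, cand[:i] + sub + cand[i + 1:]))

print(list(wordplays("wader")))   # finds "water" as a cheap sound-alike
}}}

A real version would also try to decompose candidates into multi-word phrases and then check the result against the punchline context, as the paper describes.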

While there have been attempts to make joke generators before (and a few attempts to identify Japanese puns), this group is the first to try to use a theory-based algorithm to generate jokes. This particular program is intended to be an initial step towards more generalized humor programs, with the knock-knock joke selected because of its very structured format.

I thought this paper was very interesting, especially the manner in which they try to generate wordplays. They give a fairly detailed but understandable explanation of their process, so if I were to try to build off of this, I'd feel fairly confident that I could reproduce the general algorithm. This research team has actually posted many other related articles, some of them describing their more recent work. The only complaint I have is that it doesn't say what problems the program ran into when trying to identify the wordplays in the test set of knock-knock jokes. Did it fail to find any wordplays, or did it find incorrect wordplays? Even so, it's a pretty exciting result.

Julia Taylor is a Ph.D. student working with Mazlack, so she doesn’t have a terribly exciting history (though she does travel to exciting locations like Italy, Germany, France, the Netherlands, and Ohio to present their work). Mazlack is at the University of Cincinnati, doing research in database theory, data mining, semantic web, and humor.

Chris Stone

  1. "Vision-based Autonomous Landing of an Unmanned Aerial Vehicle" by Srikanth Saripalli , James F. Montgomery and Gaurav S. Sukhatme

  2. Why:

    • I was looking through articles and I always thought robot vision was very difficult, but the way this paper described it, I feel as though I could replicate what they did (which is sweet).
  3. Summary:

    • The authors created an algorithm that allows an autonomous helicopter to identify a helipad and successfully land on it. It uses GPS to get close to the helipad, and then starts looking for the landing pad. It converts input images to black and white, and does a quick filter to get rid of a lot of the noise. It then does blob finding, and discards blobs over or below a certain range of pixels, leaving only possible helipad sizes. It then calculates the various moments of inertia, the center of gravity, etc. for each of the shapes, and compares it to the values it knows an H should have (H being the shape of the landing pad). It can then see where the landing pad is, and what the orientation is. They were able to actually have an autonomous helicopter land on a helipad and orient itself correctly repeatably and accurately.
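The paper's actual thresholds and invariants aren't reproduced in this review, so the sketch below only illustrates the pipeline described above using OpenCV; the function name, size limits, and the use of Hu-moment matching and minAreaRect are assumptions for illustration, not the authors' exact filters.

{{{#!python
import cv2

def find_helipad(gray_image, template_contour, min_area=500, max_area=50000):
    """Return (center, angle) of the blob best matching the 'H' template,
    or None. Expects an 8-bit grayscale image; a rough sketch only."""
    # 1. Threshold to black and white and clean up noise.
    _, bw = cv2.threshold(gray_image, 0, 255,
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    bw = cv2.medianBlur(bw, 5)

    # 2. Blob finding: contours stand in for connected components.
    #    (OpenCV 4 return signature.)
    contours, _ = cv2.findContours(bw, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    best, best_score = None, float("inf")
    for c in contours:
        area = cv2.contourArea(c)
        if not (min_area <= area <= max_area):
            continue                       # discard implausibly sized blobs
        # 3. Compare the blob's shape to the known 'H' via moment invariants.
        score = cv2.matchShapes(c, template_contour,
                                cv2.CONTOURS_MATCH_I1, 0.0)
        if score < best_score:
            best, best_score = c, score

    if best is None:
        return None
    m = cv2.moments(best)                  # center of gravity from moments
    cx, cy = m["m10"] / m["m00"], m["m01"] / m["m00"]
    (_, _), (_, _), angle = cv2.minAreaRect(best)   # rough orientation
    return (cx, cy), angle
}}}

Here template_contour would come from an image of the 'H' marking itself; matchShapes compares Hu moment invariants, which are insensitive to translation, scale, and rotation, so orientation has to be recovered separately (crudely here, via the minimum-area rectangle).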
  4. Initial Problem

    • The authors decided to create a real-world application of robot-vision, and essentially solve it. Their justification was that autonomous helicopters can be made much smaller and cheaper than helicopters that carry humans. Therefore, they are more nimble and can go more places, and they are safer since human life isn't at risk. So basically they wanted to use robotics to save money, time, training, and lives.
  5. Feedback:

    • This article was very accessible, and also taught me some things I didn't know. Obviously looking for a unique shape is easier than other robot-vision problems, but using center of mass and moment of inertia and other physics principles to identify what shape they were looking at was particularly cool.
  6. Genealogy

    • Srikanth Saripalli graduated from USC and is currently doing research at their Robotics Embedded Systems Lab. James F. Montgomery has a B.S. from the University of Michigan (in 1986) and an M.S. and Ph.D. from USC (all three are in Computer Science). He is now working at the NASA Jet Propulsion Laboratory. Gaurav S. Sukhatme studied at IIT Bombay in C.S. and Engineering, and then got his M.S. and Ph.D. in Computer Science from USC. He is now an Associate Professor of Computer Science at USC.

Ayla Solomon

"Genetic Algorithms for Gait Synthesis in a Hexapod Robot"

1. I’m interested in genetic algorithms and machine learning, and this article about Rodney was an interesting and slightly more understandable article about a robotic application.

1. Summary

1. Why

1. Feedback

1. Genealogy

Mike Hughes

"Toward Natural Language Computation" Alan Biermann and Bruce Ballard. Duke University. American Journal of Computational Linguistics, Volume 6, Number 2, April-June 1980.

1. Why: I’ve been interested in computational linguistics ever since the Microsoft Paperclip first told me (incorrectly) that my subject didn’t agree with its verb. The English language, in all its vagueness, provides a very interesting problem space. I’m especially interested in how a computer could go about identifying synonyms and using context clues in evaluating sentences correctly.

1. Summary: The authors describe a program they created called the “Natural Language Computer”, or NLC. This program is basically an English-speaking simplification of MATLAB. The NLC allows users to manipulate matrices of data using input commands given in everyday imperative English sentences. NLC understands simple commands like “Create a 3 by 3 matrix called myData” and “Multiply myData by 2”. It also supports more complicated functionality, like “Fill myData with random numbers” and “Delete the second column of myData”. The program even supports abstract references like “Add 2 to the second row. Then double that row.” But by far the coolest feature of NLC is that users can define entire functions using natural language, and can then call upon those functions with one sentence in future commands. The bulk of the paper deals with how the program interprets English commands properly. In a later section, the authors describe how the program behaved surprisingly well when it was tested by 23 undergraduate students with only a cursory knowledge of the system. Over 80 percent of the 1581 commands given by these students were interpreted correctly, and half of the errors were attributed to incorrect human input rather than a failure in NLC’s interpretation. These results indicate that, at least for this specific problem domain, natural language interfaces are certainly achievable and worth pursuing.
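NLC's real parser handles far richer English than this, but a toy pattern-matching interpreter over NumPy arrays gives the flavor of the matrix commands quoted above; the regular expressions and the execute function here are invented stand-ins, not NLC's actual grammar.

{{{#!python
import re
import numpy as np

matrices = {}   # name -> NumPy array

def execute(command):
    """A toy interpreter for a few NLC-style imperative sentences
    (illustrative only; the real NLC parses far more varied English)."""
    cmd = command.rstrip(".")
    if (m := re.match(r"create a (\d+) by (\d+) matrix called (\w+)", cmd, re.I)):
        matrices[m[3]] = np.zeros((int(m[1]), int(m[2])))
    elif (m := re.match(r"fill (\w+) with random numbers", cmd, re.I)):
        matrices[m[1]] = np.random.rand(*matrices[m[1]].shape)
    elif (m := re.match(r"multiply (\w+) by (-?\d+)", cmd, re.I)):
        matrices[m[1]] *= int(m[2])
    elif (m := re.match(r"delete the second column of (\w+)", cmd, re.I)):
        matrices[m[1]] = np.delete(matrices[m[1]], 1, axis=1)
    else:
        raise ValueError(f"Could not interpret: {command!r}")

execute("Create a 3 by 3 matrix called myData")
execute("Fill myData with random numbers")
execute("Multiply myData by 2")
execute("Delete the second column of myData")
print(matrices["myData"].shape)   # (3, 2)
}}}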

1. Initial Problem: The authors built NLC in order to investigate the validity of critics’ claims that natural language was not a viable method of human-computer interaction. The authors wished to show that, given the right domain and constraints, an implementation could be made that would both prevent users from having to learn specific machine syntax and allow them to express their thoughts naturally, while ensuring that commands were clearly defined and unambiguous for the computer to interpret.

1. Feedback: For the most part, this article was easy to understand. The explanations and examples were kept simple, and the figures allowed me to gain an even better understanding of the program’s flow and behavior. I did have some trouble interpreting some of the grammatically-technical sections, which used a lot of terminology to describe how NLC parsed sentences into “verbicles,” “particles” and “operand level noun groups.” I was able to understand most of the description of the implementation. I believe that given a good amount of time and some additional resources on natural language parsing, I could replicate most of this work (if not the abstract referencing and function definition, at least the simpler matrix manipulation functionality).

1. Genealogy: Alan Biermann received his Ph.D. from UC Berkeley in 1968. At the time of writing, he was a computer science professor at Duke. He remains there in that capacity today. Bruce Ballard was a Ph.D. student of Biermann’s at Duke, graduating in 1979. At the time of writing, he was a computer science professor at Ohio State. Since then, he has worked in the field of computational linguistics at Rutgers University and AT&T Bell Laboratories.

Brad Westgate

  1. "Monte-Carlo Planning in RTS Games" Michael Chung, Michael Buro, and Jonathan Schaeffer. University of Alberta. Edmonton, Alberta, Canada.

  2. Why:

    • I'm a Warcraft III addict. The current state of RTS AIs is pretty bad, so I wanted to know what progress has been made in developing better ones.
  3. Summary:

    • The authors created a general Real-Time Strategy game player based on Monte-Carlo (statistical sampling) simulations. The general planning algorithm for their player is this (a rough code sketch follows the list):
    • Based on the current game state, think of as many plans as possible for my own strategy and my opponent's strategy.
    • Randomly pick a plan for me and a plan for my opponent, and evaluate what would happen.
    • Repeat for as long as possible, given time constraints.
    • Choose the plan that has the best statistical result.
    • Repeat when the game state has changed (if units have died, etc.).
    • They implemented this algorithm in a simple RTS Capture-the-Flag game. They found that it was able to defeat simple or random strategies, but have not yet matched it against high-level scripted strategies.
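A bare-bones version of the planning loop above might look like the sketch below. The callables generate_plans, simulate, and evaluate are invented stand-ins for the paper's plan generator, simulator, and evaluation function, and plans are assumed to be hashable objects (e.g. strings).

{{{#!python
import random
import time
from collections import defaultdict

def monte_carlo_plan(state, generate_plans, simulate, evaluate, budget=0.1):
    """Pick a plan for 'me' by randomly sampling (my plan, opponent plan)
    pairs until the time budget (in seconds) runs out."""
    my_plans = generate_plans(state, player="me")
    opp_plans = generate_plans(state, player="opponent")
    totals, counts = defaultdict(float), defaultdict(int)

    deadline = time.monotonic() + budget
    while time.monotonic() < deadline:
        mine = random.choice(my_plans)
        theirs = random.choice(opp_plans)
        outcome = simulate(state, mine, theirs)   # roll the plans forward
        totals[mine] += evaluate(outcome)         # score the resulting state
        counts[mine] += 1

    # Choose the plan with the best average (statistical) result.
    return max(my_plans,
               key=lambda p: totals[p] / counts[p] if counts[p] else float("-inf"))

# The caller re-invokes monte_carlo_plan whenever the game state changes
# (units die, resources shift, new information arrives).
}}}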
  4. Initial Problem:

    • RTS games are extremely popular. Therefore, making AIs for them is an interesting problem. Also, RTS games are not so different from real war, so there is some futuristic military value toward this research as well. However, making AIs for RTS games is hard. There are many factors making them more difficult to plan for than simple two player turn-based games, including:
    • Imperfect information. You do not know exactly what your opponent is doing, so a simple search through a move tree will not work.
    • Real time. Decisions need to be made immediately. State is changing continually.
    • Lots of things to do. A good AI needs to control individual units (micro), manage the overall battle, and make high-level decisions about unit choices and resource gathering (macro).
    • Because of this difficulty, all current RTS computer players use scripts. These tell the player what to do and when, but of course make the player predictable and easy to beat. These authors are among the first attempting to use evaluation functions to solve the problem instead of scripts.
  5. Feedback:

    • I enjoyed the article. A lot of the discussion was abstract, but it would be possible to reproduce their work. Their work is definitely in the early stages. The game they tested their algorithm on was a simple one, and the opponents were bad. Still, I like the general approach. The difficulties are in generating all the plans (figuring out what is possible from the current state), and in the evaluation function, which compares the plans. Also, the whole process needs to be repeated frequently, because the game state has changed.
    • Since there are so many RTS games, there is a lot of room for projects on this. Apparently, there is an open engine called ORTS (Open RTS), which allows people to build RTS games and test strategies.
  6. Genealogy:

    • One of the three authors, Jonathan Schaeffer, is in the AIGP. He got his Ph.D. in 1986 from the University of Waterloo (Canada). The authors are all associated with the Department of Computing Science, University of Alberta.

Thomas Michon

Steering Behaviors For Autonomous Characters by Craig W. Reynolds

Why

Summary

Initial Problem

Feedback

Genealogy

Matt Donahoe

  1. "Hierachical Temporal Memory" http://en.wikipedia.org/wiki/Hierarchical_Temporal_Memory

  2. Why:

    • This seems to me like the closest people have come to creating intelligence.
  3. Summary:

    • Jeff Hawkins studied how the brain works and created a model of how intelligence works in general. From this, a MATLAB model was created that can do human-like image recognition on simple shapes.

  4. Initial Problem

    • Computers are really good at searching large amounts of data and following strict rules, but they aren't very good at finding things that are similar to a search request. Image recognition is a classic example of this problem, and their system demonstrates its power by correctly identifying shapes that "look like" other shapes.
  5. Feedback:

    • They are careful not to release exactly how their system works, although they do talk a great deal about the basic idea. There is a tree structure of thinking nodes which pass causes and beliefs back and forth.

  6. Genealogy:

    • Jeff Hawkins got his B.S. in EE at Cornell in 1979. He is most famous for creating the Palm Pilot.

Connor Riley

  1. "The Unfriendly User: exploring social relations to chatterbots"

  2. Why? I'm very interested in user interfaces and Human-Computer Interaction. Natural Language got me to the John Lennon Artificial Intelligence Project which got me to Alice and this paper, which was more serendipitous than expected. I love the internet.

  3. Summary The authors analyzed a group of people's interactions with Alice, a chatterbot which won the 2000 Loebner Prize (a prize for the 'most human' robot that, somewhat by the prize's definition, doesn't pass the Turing test). By doing so they hoped to define how people expect to interact with a computer and find some middle ground between those who say chatterbots are the customer service of the future and those who suggest that anthropomorphizing a computer will lead the user to ascribe humanlike traits that the computer does not have, hampering the interaction. Through the analysis of chat logs, the researchers found that people are willing to divulge a lot of personal information to the bot, expect the bot to disclose its personal information in return, and may have competitive or cooperative attitudes toward it, but generally expect the bot to be subordinate to their human intelligence.

  4. Initial Problem The authors saw the growing popularity of chatterbots as indicative of a growing tendency for humans to anthropomorphize computers, expecting them to be 'friendly' and humanlike. To test the boundaries of this relationship, they presented users with a computer that was as humanlike as possible in order to qualify the boundary between the anthropomorphization and the understanding of the computer as a tool.

  5. Feedback The analyses of social dynamics in the beginning are very dense but nonetheless interesting. I felt it was important to include as much human social psychology as possible in order to compare and contrast the later examples of how people talked to the program. It was compelling in that I felt it would be interesting research to do, given a little more study of sociology.

  6. Genealogy The authors are more HCI than AI researchers and aren't in the database.

Brendan Doms

Why

Summary

Initial Problem

Feedback

Genealogy

Andy Barry

Why

Summary

Initial Problem

Feedback

Genealogy

Doug Ellwanger

Hierarchical Temporal Memory: Concepts, Theory, and Terminology by Jeff Hawkins and Dileep George, Numenta Inc. http://en.wikipedia.org/wiki/Jeff_Hawkins http://www.stanford.edu/~dil/

Why

Summary

Initial Problem

Jeff Hawkins is the inventor of the Palm Pilot, the Treo, and the Graffiti character recognition system. He has expressed a large interest in applying understanding of the brain to AI. He published a book, On Intelligence, that describes some of how the brain works and how HTM models that. He then founded a company called Numenta that plans to develop an HTM research platform.

Feedback

This white paper gives a good broad overview of HTMs, but it is lacking a number of details, such as what is actually passed between nodes and how “dirty” inputs are mapped to “clean” quantization points. I would like to know more in order to be able to implement an HTM, which should come with the company’s research release.

Genealogy

Stephen Longfield

  1. Stanley: The Robot that Won the DARPA Grand Challenge

  2. Why:

    • I'm working in the Olin Intelligent Vehicle Lab this semester, I worked there last summer, and I'm currently working on the robotic tractor for ROCONA, so part of my reason is that I want to understand how the winner of the DGC programmed their car. I've heard a lot of stories about the DGC cars (especially the Red Team over at CMU, and Team ENSCO, which had our very own James Whong and George Harris as programmers), with a lot of the positive talk being focused on Stanley. I've also talked to the Olin SCOPE DGC2 team that's working with Team MIT, so I've heard some of the issues that they're trying to deal with. It's a very interesting subject that has had quite a bit of money put into it, and it's something that seems like it shouldn't be too complex, but there are all kinds of little things that trip everyone up.
  3. Summary:

    • There were quite a few things addressed in the paper, but the main point (and the most interesting to me) was integrating the sensors and deciding where to drive or not. They used a system that first synthesized 5 LIDARs (multi-point laser range finders) and built a 'driveable / non-driveable' map out of them. This map was then filtered using probabilistic algorithms. It was then referenced to a camera image, which determined the colors of the driveable area, and then used this to extrapolate what other areas should be driveable, to give a better estimate of how far ahead the road was clear. These things were combined with the pitch and accelerations through another probabilistic algorithm to generate speed and wheel angle commands. One very interesting thing about both of the probabilistic algorithms is that they were based on machine learning and were trained both from data of people driving the car and from the car itself driving (as in, if it drove over an area that it had thought wouldn't be driveable, it would note that in the probabilistic driveable-or-not model). Also, the camera-road processing was able to build on knowledge of the road that it was driving on, but at the same time was able to learn a new road very quickly. A rough sketch of this kind of probabilistic map update follows.
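Stanley's actual learned classifier is more sophisticated than this, but the idea of fusing repeated LIDAR evidence into a driveable / not-driveable grid can be sketched as a simple log-odds update; the grid size, thresholds, and evidence values below are invented for illustration and are not the paper's model.

{{{#!python
import numpy as np

# Log-odds grid: 0 = unknown, >0 = likely obstacle, <0 = likely driveable.
GRID_SHAPE = (200, 200)          # cells of, say, 0.25 m each
log_odds = np.zeros(GRID_SHAPE)

L_OBSTACLE = 0.9                 # evidence added when a return looks tall
L_FREE = -0.6                    # evidence added when the ground looks flat

def update_cell(row, col, height_jump, threshold=0.15):
    """Fuse one LIDAR measurement into the grid: a big vertical jump between
    neighboring returns suggests an obstacle, a small one suggests driveable."""
    log_odds[row, col] += L_OBSTACLE if height_jump > threshold else L_FREE

def driveable_map(cutoff=0.0):
    """Binary driveability estimate after fusing many scans."""
    return log_odds < cutoff

# Example: three scans of the same cell, one noisy obstacle-like return.
for jump in (0.02, 0.30, 0.03):
    update_cell(50, 80, jump)
print(driveable_map()[50, 80])   # True: two flat returns outweigh one noisy one
}}}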
  4. Initial Problem

    • The main problem that they were addressing was how to get an autonomous vehicle to run through a course and not hit anything. The global path planning had been done and presented as a rough RDDF file, but local path planning, obstacle detection and avoidance, velocity decisions, and many other things needed to be synthesized into a way to navigate around the desert.
  5. Feedback:

    • The paper was able to cover a lot of ground in its 32 pages. I think that I could actually replicate a lot of what they did in the paper, if I sat down and spent quite a bit of time with the math presented. Some of it I don't know if I could do exactly, since it's fairly complicated processing (especially the machine vision), but they fairly clearly outlined the algorithms used. One interesting point is that the rivalry between Stanford and CMU was VERY evident in the article, and they were very proud of passing H1ghlander. There is a full-page section that is simply the image record of Stanley passing H1ghlander, and the point where it passes is marked with a green 'x' on the map. H1ghlander is also the only other competing robot referred to by name in the paper.
  6. Genealogy

    • There were a ton of people working on the project who are referenced as authors of the paper. However, the head of the team and the main author of the paper was Sebastian Thrun. His homepage is robots.stanford.edu; he is an Associate Professor in the Computer Science Department at Stanford University and Director of the Stanford AI Lab, and he has written a book called Probabilistic Robotics. He is currently working on the 2007 DGC, as well as teaching classes.

Erin Kelly

  1. A context-dependent attention system for a social robot by Breazeal, C. and Scassellati, B.

  2. why:

    • I remember when Kismet was new, and it fascinated me. I realized that my interest in it hasn't changed.
  3. summary:

    • The article attempts to reproduce human attention and perception. Kismet takes images and compares successive frames for motion; it also looks for human faces and for bright colors, under the assumption that a bright color means a toy. Depending on what state Kismet is in, it will alter the weights of what it looks for and how long things hold its attention (a rough sketch of this weighting follows).
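Kismet's real feature maps come from its vision system, but the state-dependent weighting described above can be sketched as a weighted sum of per-pixel saliency maps; the map names, weights, and states below are made up for illustration (random arrays stand in for the vision outputs).

{{{#!python
import numpy as np

# Per-pixel feature maps from the vision system (stand-ins here):
# each is the size of the camera image, values in [0, 1].
H, W = 120, 160
motion_map = np.random.rand(H, W)     # frame-differencing result
face_map = np.random.rand(H, W)       # face-detector response
color_map = np.random.rand(H, W)      # bright, saturated "toy" colors

# State-dependent weights: the robot's current drive changes what wins.
WEIGHTS = {
    "seek_people": {"motion": 0.2, "face": 0.7, "color": 0.1},
    "seek_toy":    {"motion": 0.2, "face": 0.1, "color": 0.7},
}

def attention_target(state):
    """Return the (row, col) of the most salient point for this state."""
    w = WEIGHTS[state]
    saliency = (w["motion"] * motion_map
                + w["face"] * face_map
                + w["color"] * color_map)
    return np.unravel_index(np.argmax(saliency), saliency.shape)

print(attention_target("seek_people"))
print(attention_target("seek_toy"))
}}}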
  4. initial problem:

    • In building a robot capable of social interaction, one of the points made by the authors is that social skills require complex perceptual, motor, and cognitive abilities. In this paper, the authors focused on a human-like attention system to fit in with the other systems.
  5. feedback:

    • I understood the article, and I could possibly replicate the work. The methods for identifying faces, colors, and motion are there, as well as how Kismet sees. The algorithms for Kismet's state are not covered, and the state is relatively important for what Kismet looks for.
  6. genealogy:

    • Neither of the authors appears on the genealogy site; however, both worked at MIT on Kismet between 1998 and 2000, when all of their papers were published. Based on the number of papers by the two authors on Kismet, they were the two primary researchers. Breazeal got her Ph.D. at MIT in 2000, with a dissertation on sociable machines.

Ben Fisher

"Logical Hidden Markov Models" by Kristian Kersting, Luc De Raedt, and Tapani Raiko

This fairly lengthy paper is about an idea for enhancing Markov models by allowing them to consider structure and current state. I chose this article because in Software Design we are currently making simple Markov models for literature. I am also interested in the consequences of a probabilistic model that can provide an infinite number of different paths.

A logical hidden Markov model, or LOHMM, first analyzes a series of data and then creates a probabilistic model of the patterns in that data. For example, the program was given a log of UNIX commands. Being a LOHMM, the program took into account different users and the state of the program. Because each UNIX command has a fairly uniform structure, this was well suited for analysis. For example, in sample runs of the model, it will often run cd foldername after mkdir foldername, because the model says that this step is very likely. The program is very unlikely to generate the same sequence twice. A LOHMM is useful for recognizing and inferring patterns. The three most important computations the program must make are: determining the probability that a given sequence was generated by a given model, determining the sequence of internal states that was most likely to produce the series, and estimating the parameter weights in a distribution from a given sequence. Using these computations, the program could distinguish significantly between programmers and non-programmers using UNIX. The authors then discuss many practical applications for LOHMM models, including protein folding and mRNA. They can be used whenever discrete sequences appear in certain patterns. The paper ends by detailing how their ideas differ from current Markov models and providing an appendix with proofs.
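The logical extension isn't reproduced here, but the second of those three computations (the most likely hidden-state sequence) is the classic Viterbi algorithm; below is a sketch for an ordinary, non-logical HMM with a made-up UNIX-command example (the state and observation labels and all probabilities are invented).

{{{#!python
import numpy as np

def viterbi(observations, start_p, trans_p, emit_p):
    """Most likely hidden-state sequence for an ordinary HMM.
    (The logical variant attaches structured atoms to these states.)"""
    n_states = len(start_p)
    T = len(observations)
    prob = np.zeros((T, n_states))
    back = np.zeros((T, n_states), dtype=int)

    prob[0] = start_p * emit_p[:, observations[0]]
    for t in range(1, T):
        for s in range(n_states):
            scores = prob[t - 1] * trans_p[:, s] * emit_p[s, observations[t]]
            back[t, s] = np.argmax(scores)
            prob[t, s] = scores[back[t, s]]

    # Trace back the best path from the most probable final state.
    path = [int(np.argmax(prob[-1]))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t, path[-1]]))
    return path[::-1]

# Toy example: states 0="programmer", 1="non-programmer";
# observations 0="mkdir", 1="cd", 2="ls" (indices into the emission table).
start = np.array([0.5, 0.5])
trans = np.array([[0.9, 0.1], [0.1, 0.9]])
emit = np.array([[0.4, 0.4, 0.2], [0.1, 0.2, 0.7]])
print(viterbi([0, 1, 2], start, trans, emit))
}}}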

The paper was interesting and I was not left with many questions. I did not understand all of the notation in the paper, particularly the formal descriptions. Some helpful pseudo-code was given, but it would still require effort to duplicate the results. I would say that the content lived up to my expectations.

The authors are evidently not in the AI Genealogy Project. Kristian Kersting and Luc De Raedt are at the Institute for Computer Science in Freiburg, Germany.

Tapani Raiko is at the Laboratory of Computer and Information Science at the Helsinki University of Technology.

Eric Hwang

1. Chutiporn Anutariya, Vilas Wuwongse, Kiyoshi Akama, and Ekawit Nantajeewarawat. "RDF Declarative Description (RDD): A Language for Metadata."

2. Why: I chose this article because I was looking into the semantic web. Paging through the articles on the W3C’s page about the semantic web brought me to this article.

3. Summary: The Resource Description Framework, or RDF, is a framework recommended by the W3C for metadata. However, it cannot represent many properties, such as transitivity, and it also cannot process questions about the metadata. As a solution to this problem, the authors of the paper developed the RDF Declarative Description, or RDD, to enhance RDF. The paper first lays the groundwork for RDD by generalizing RDF elements, which explicitly describe things, into RDF expressions, which describe categories of things by replacing the explicit parts with variables. The RDF expressions can then be specialized back into explicit RDF elements. Then, the paper describes the RDD language, more specifically constraints and RDF declarative descriptions. Constraints are predicates that constrain the data in some way, such as ensuring that one variable is the product of two others. Declarative descriptions are sets of statements about specific systems. The language, which is a strict mathematical model of metadata structuring, allows implicit conclusions about the data to be drawn, as well as allowing queries about the data to be made.
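RDD's formal machinery isn't reproduced in this summary, but the basic move it builds on (an expression with variables that can be specialized back into explicit triples) can be sketched as simple pattern matching over a toy set of RDF-style triples; the data, predicate names, and query function below are invented for illustration.

{{{#!python
# RDF-style data: explicit (subject, predicate, object) triples.
TRIPLES = {
    ("alice", "ancestorOf", "bob"),
    ("bob", "ancestorOf", "carol"),
}

def is_var(term):
    return isinstance(term, str) and term.startswith("?")

def match(pattern, triple, bindings):
    """Try to specialize one triple pattern (which may contain ?variables)
    against one explicit triple; return extended bindings or None."""
    bindings = dict(bindings)
    for p, t in zip(pattern, triple):
        if is_var(p):
            if bindings.get(p, t) != t:
                return None      # variable already bound to something else
            bindings[p] = t
        elif p != t:
            return None          # constant terms must match exactly
    return bindings

def query(pattern):
    """All variable bindings under which the pattern holds in TRIPLES."""
    return [b for triple in TRIPLES
            if (b := match(pattern, triple, {})) is not None]

# "Who is an ancestor of whom?" -- an expression with two variables.
print(query(("?x", "ancestorOf", "?y")))

# Transitivity itself (x ancestorOf y, y ancestorOf z => x ancestorOf z)
# is the kind of implicit conclusion RDD's rules are meant to license.
}}}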

4. Purpose: The authors conducted their work in an attempt to improve the RDF framework by adding the capability to handle more logical properties and to process queries about data. RDF as it was could not handle things like transitivity, nor did it provide a way to process questions about the metadata. They built their RDD atop the existing RDF framework using declarative description theory and gave examples in DAML+OIL, a specific RDF markup language.

5. Feedback: The general ideas in the article were quite understandable, mostly because of the concrete examples given. However, I couldn't understand many of the mathematical formulations made in the article. I personally would not be able to replicate their work; however, an expert in the RDF field would probably be able to do so using the paper and its references. After reading the paper over several times, I still had questions about some of the formulations used in the paper. I was fairly satisfied with the paper, though I wish it had given more examples of applying the formalism described in the earlier parts of the paper. The content was about as expected; it described a way to model metadata that allowed for more complicated operations.

6. Authors: The authors weren't in the genealogy project, probably because they aren't traditional AI researchers. Googled information follows: Vilas Wuwongse studied entirely at the Tokyo Institute of Engineering, and since then has been a professor at the Asian Institute of Technology (AIT) in Thailand. Chutiporn Anutariya studied at AIT, where she is now a research assistant. Kiyoshi Akama has researched at the Tokyo Institute of Technology and Hokkaido University. His profile does not say where he studied. Ekawit Nantajeewarawat studied at Chulalongkorn University, then at AIT, both in Thailand. After graduating, he worked in industry for a couple of years, then became a professor at AIT, later moving to the Sirindhorn International Institute of Technology.

AndyGetz

"Computing Machinery and Intelligence" - A. M. Turing - http://www.abelard.org/turpap/turpap.htm - 1950

I selected this article because it caught my eye - it is old and famous and I'd never read it. I'm also interested in the idea of putting enough human knowledge into a computer that it might imitate a human.

In response to the question "Can machines think?" the article begins by defining the Turing test in strict terms, starting with the gender-based "party game" and covering definitions of what kinds of machines are permissible, etc. Following the definition of the question, a number of answers to the contrary are reviewed and rebutted. Turing considers counter-arguments on (among others) theological, mathematical, and philosophical grounds. Notable are the objections that a discrete-state system is too limited to describe human thought, that human behavior is chaotic and non-repeatable, and that human behavior is not only based on what is scientifically observable and quantifiable. The final section of the paper discusses machine learning, hypothetical though it may have been at the time, in response to Lady Lovelace's objection in particular.

I understood the article very well - the arguments were precisely and clearly phrased, and the language was technical and precise without a loss of clarity or simplicity. I suppose it asks more of a question than it truly answers, but that is after all the point of the paper.

