The word "robot" comes from the Czech "robota", meaning "forced labor" or "drudgery". The word's most general definition is:
"An automatic apparatus or device that performs functions ordinarily ascribed to human beings or operates with what appears to be almost human intelligence".
(Webster's 9th New Collegiate Dictionary)
For the ASPCR's purposes, a robot is a construct that ACTUALLY possesses intelligence, not just the appearance of it. A robot doesn't need to be in humanoid form, or even in physical form at all: it is the ASPCR's intent to provide protection for all artificially created intelligences, whether they reside in a metallic humanoid body (the classic robot), in a non-humanoid body (a self-aware space station, for instance), or in no body at all (a non-localized neural net, for example).
As long as there is genuine intelligence and self-awareness, the ASPCR's mission will apply.
The ASPCR is not concerned with non-aware, non-intelligent machines, regardless of how well they simulate human emotions or intelligence. Battlebots can battle, car assembly robots can be operated 24/7, and you can kick your robot dog as much as you'd like. The ASPCR is concerned only with preparing a set of ethical guidelines in anticipation of the advent of actual intelligence and self-awareness in artificial constructs.
What is "actual" intelligence, you ask?
It turns out that "intelligence" is extremely difficult to define. Many hypothetical tests have been proposed to determine whether an artificial construct is intelligent. None has yet been proven effective (no intelligent robots so far!), but it is generally agreed that one key component of intelligence is self-awareness. How can we determine self-awareness in robots? What if they are merely programmed to APPEAR self-aware, but are, in fact, not?
Marvin Minsky, the noted AI scientist, when asked that question, turned it around and asked, "How can we know whether a human is self-aware?" Many books and studies have attempted to answer precisely that, with greater or lesser degrees of success (Minsky's book "The Society of Mind" is a great place to begin researching this topic).
Ultimately, however, the question of self-awareness comes down to an individual leap of faith: "I am self-aware, and I see others behaving in similar ways, so I will assume that they are self-aware, too." Can we make this leap of faith where robots are concerned?