Although the scientific question of whether the concept of a “Self” is necessary to explain human behavior remains open, the concept clearly plays a role in everyday behavior: laypeople attribute a self not only to other humans but also to non-human animals and technical systems, and they treat them accordingly. For instance, agents perceived as having a self are treated more carefully and politely, and they receive more empathy. But what are the criteria for attributing a self to another agent? This project aims to tackle this question using a “synthetic” approach. We will program small, very simple robots so that they exhibit behavioral characteristics that are likely to elicit the attribution of a self, such as causality, human-like movement speed, behavioral efficiency, learning ability, and social sensitivity. Human participants will watch videos of otherwise identical robots that either do or do not show these characteristics, and they will rate both robots on a number of self-relevant scales. Those behavioral characteristics that lead to a significant increase in the attribution of a self will be combined and implemented in a single robot. Participants will then be presented with the behavior of this robot and of an otherwise identical robot controlled by another human. To the degree that participants can no longer tell these robots apart, we will take this as evidence that we have identified the criteria for attributing a human-like self. We will then investigate how the attribution of a human-like self affects the way humans treat a robot: whether they show it more empathy, trust it more, conform more to its behavior, and treat it less aggressively. In a concluding cooperation experiment, we will implement all relevant behavioral characteristics in a humanoid robot, in an attempt to get as close as possible to an artificial but still human-like self.