Experience with spoken language begins prenatally, as hearing becomes operational during the second half of gestation. While maternal tissues filter out many aspects of speech, they readily transmit speech prosody and rhythm. These properties of the speech signal then play a central role in early language acquisition. In this study, we ask how the newborn brain uses variation in duration, pitch, and intensity (the three acoustic cues that carry prosodic information in speech) to group sounds. In four near-infrared spectroscopy (NIRS) studies, we demonstrate that perceptual biases governing how sound sequences are perceived and organized are present in newborns from both monolingual and bilingual language backgrounds. Importantly, however, these prosodic biases emerge only for acoustic patterns found in the prosody of the newborns' native languages. These findings advance our understanding of how prenatal language experience lays the foundations for language development.