World knowledge (assertions such as “you can slice an apple”) is useful in applications ranging from annotating video segments to generating motion plans for robots. While people generally acquire this kind of knowledge naturally, giving a machine the same insight is a challenging problem. The growth of the internet, however, provides an ever-increasing amount of data to learn from: people post more pictures and write more text every day. Among these sources, one relatively untapped domain is cooking, in the form of cooking recipes. As instructional texts, recipes encode information about the states of objects and the changes those objects undergo to reach a goal, and thus present an opportunity to learn world knowledge for applications such as automatic illustration and video annotation. I will present a method for extracting this information from recipe data.