Teachable machines enable children to explicitly train a machine learning (ML) model using data and labels that they generate themselves. The iterative nature of these tools holds potential for developing creativity, flexibility, and comfort with ML. However, many ML initiatives assume some level of programming knowledge or employ gesture recognition tasks whose training sets can ultimately be difficult for children to inspect and contrast. We explore how children use machine teaching interfaces with a team of 14 children (aged 7–13 years) and adult co-designers. Children trained image classifiers and tested each other’s models for robustness. Our study illuminates how children reason about ML concepts, offering these insights for designing machine teaching experiences for children: (i) ML metrics (e.g., confidence scores) should be visible for experimentation; (ii) ML activities should enable children to compare training sets and strategies; and (iii) classification tasks should promote quick data inspection (e.g., images vs. gestures).