One of the promises of big data in higher education (learning analytics) is the ability to accurately identify and assist students who may not be engaging as expected. These expectations, distilled into parameters for learning analytics tools, can be determined by human teacher experts or by algorithms themselves. However, little work has been done to compare the predictive power of knowledge models acquired from teachers with that of models acquired from algorithms. In the context of an open source learning analytics tool, the Moodle Engagement Analytics Plugin, we examined the ability of teacher-derived models to accurately predict student engagement and performance, compared to algorithm-derived models as well as hybrid models. Our preliminary findings, reported here, provided evidence for the fallibility and strength of teacher- and algorithm-derived models, respectively, and highlighted the benefits of a hybrid approach to model and knowledge generation for learning analytics. A human-in-the-loop solution is therefore suggested as a possible optimal approach.