
MSPnet Academy Discussions



Topic: "MSPnet Academy: Tools to Support Inquiry"


MSPnet Academy Discussion
November 4th - November 18th, 2015

Presenter: Robert Tinker, Founder, the Virtual High School and the Concord Consortium

Bob will demonstrate an integrated set of computer-based tools that students can use to undertake sophisticated, open-ended investigations that are similar to the approach and thinking used by scientists. This allows students to experience the practices of science as envisioned in the NGSS standards. Although the project focused on secondary physics and physical science content, it should be widely applicable because all science disciplines depend on investigating cause and effect, which is what this free, open source toolset enables.

This archived topic is open to the public.


Lots to chew over

posted by: Brian Drayton on 11/5/2015 7:37 am

Hi, Bob,
thanks for the presentation. I am really interested in the Inquiry Space system, and we will want to brainstorm with you about possible collaborations.
Listening to your presentation, though, I realize that there are several separate innovations wrapped up in one, and each of them represents a different set of R&D challenges from the classroom point of view. The ones on my mind this morning are:
1. The Bayesian Knowledge Tracing-- this is interesting, and one of the first positive uses of Big Data that I've seen. But right now I see it primarily as a way to throw up more hypotheses to be tested (or observations to be pondered)-- the "learning patterns" that your example provided would be all too easy to take too seriously. So a cool thing would be to generate several of these, in well-characterized situations, and then attack them with other methods, to understand what's accidental and what might relate to specific mechanisms, so that they don't get fossilized too quickly into official Misconceptions. One thing I was wondering: what is the "switch" that happens when, in one pattern, the students are making little forward progress and then take off? What happened?

2. The Problem Space reasoning-- I am really intrigued by this, and I'd love to hear more about it. This seems like a big challenge for teachers to learn, as a way of thinking, and then as something to use/support in teaching. Your collaborating teachers should be analyzed as case studies!

3. Deeply digital-- I really like the way you've identified barriers and addressed them (at least to a first approximation), and I would think that this "barrier theory" (reminds me of the old paper on the "critical barriers phenomenon" from the ND study group, by Hawkins, Morrison and someone else) could be a really interesting research area in itself, related to teacher knowledge and practice.
Hurray for inquiry!
- brian

Learning Patterns

posted by: Robert Tinker on 11/5/2015 4:57 pm


First let me thank all the participants in my webinar. I compressed a lot into a short time and I look forward to expanding these ideas together. Please feel free to comment and ask questions.

There was interest in the learning patterns that emerged from the BKT analysis and I was asked to describe the categories. Not having them at my fingertips, I sort of punted during the webinar. I want to correct the record. Here are the categories that we observed, as reported in http://concord.org/publications/newsletter/2015-spring/analytics-student-learning

A1: Steady and fast mastery of the challenge, where scores start low but then quickly become near perfect.
A2: Steady but slower mastery of the challenge, taking significantly more trials to move from low scores to high scores.
B: Slow mastery of the challenge, after receiving low scores for some time and then changing to high scores, with relatively large fluctuations during the change.
C: Fast mastery of the challenge, from medium to high scores, with relatively small fluctuations.
D: Re-mastery of the challenge, with relatively large score fluctuations.
E1: Near perfect scores, indicating a high degree of prior knowledge about the challenge.
E2: Perfect scores, indicating students mastered the knowledge prior to the challenge.

BKT did not distinguish A1 from A2 or E1 from E2--these were seen in the structured analysis of long screencasts. But the five categories A through E emerged from both the BKT statistics and the analysis of video--giving us confidence that the statistics were seeing something real.

As Brian suggests, these categories should not be over-generalized. We need to determine how stable they are for different students and games. Patterns D and E have been observed in at least two other studies. They are probably quite common: kids zoom through (E) or they struggle (D). Patterns B and C are what we hope for--a good match between students and instruction, as seen by a slow start and then acceleration, indicating that they figured it out. (A) is an alternative pattern that seems to show learning, too. We can only hope that further studies will tell us which of these are stable, which depend heavily on the particular context, and whether there are other common patterns.
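
For readers who want to see the machinery behind these trajectories, here is a minimal sketch of the standard Bayesian Knowledge Tracing update step in Python. This is the generic textbook formulation of BKT, not the fitted InquirySpace model; the parameter values and names are illustrative assumptions.

def bkt_update(p_know, correct, p_slip=0.1, p_guess=0.2, p_transit=0.15):
    """One BKT step: revise the probability that the student knows the
    skill, given one scored response, then apply the learning transition."""
    if correct:
        # A correct response comes from knowing (and not slipping) or from guessing.
        posterior = p_know * (1 - p_slip) / (
            p_know * (1 - p_slip) + (1 - p_know) * p_guess)
    else:
        # An incorrect response comes from a slip, or from not knowing and not guessing.
        posterior = p_know * p_slip / (
            p_know * p_slip + (1 - p_know) * (1 - p_guess))
    # The student may learn the skill between opportunities.
    return posterior + (1 - posterior) * p_transit

# Trace the estimate across a run of scored trials (1 = success, 0 = failure).
p = 0.3  # illustrative prior: probability the student already knows the skill
for outcome in [0, 0, 1, 0, 1, 1, 1]:
    p = bkt_update(p, outcome)
    print(round(p, 3))

Run over a whole sequence of challenge scores, this rising (or stalling) estimate is the kind of raw material from which trajectory categories like those above can be read off.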

Parameter Space Reasoning

posted by: Robert Tinker on 11/5/2015 5:24 pm

Brian wanted to know more about our measure of PSR. We have an internal, non-reviewed technical report on this that I will be glad to share with any professional who requests it: write me directly at bob@concord.org. I cannot post it because it contains information on the schools and the actual test.

Here is the executive summary from that report.

The InquirySpace project has identified and theoretically characterized Parameter Space Reasoning (PSR) as the type of cognition necessary for students to successfully engage in inquiry-based experimentation. PSR is associated with planning experiments, operationalizing a set of parameters, navigating the parameter space through multiple experimental runs, identifying patterns in parameter space plots, and reflecting on sources of error.
This theorization of the PSR construct was used to develop the PSR test. In this document, the construct validity of the PSR test was examined using the Rasch-Partial Credit Model, which assumes a single dimension underlying the items in the PSR test. The Rasch-PCM analysis results indicate that
(1) the PSR test was highly reliable,
(2) scoring rubrics reflected students' performance levels predicted by the underlying PSR construct, and
(3) the items in the PSR test were found to form a unidimensional construct.
Instructional sensitivity of the items on the PSR test was also examined. Students improved the most on the items addressing the experimentation context identical to the IS curriculum activities. The student improvement became less and less prominent as the item context was farther and farther removed from the IS curriculum contexts. The impact of the IS intervention on students' learning of PSR was compared across three teachers. The results indicated variations among the three teachers in terms of how much students were impacted by the IS curriculum...
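
For readers unfamiliar with the Rasch-Partial Credit Model named in the summary, a standard statement of the model (a generic formulation, not taken from the InquirySpace report itself) gives the probability that person n responds in score category x on item i as

$$P(X_{ni} = x) = \frac{\exp\left(\sum_{k=0}^{x} (\theta_n - \delta_{ik})\right)}{\sum_{h=0}^{m_i} \exp\left(\sum_{k=0}^{h} (\theta_n - \delta_{ik})\right)}, \qquad x = 0, 1, \ldots, m_i,$$

where $\theta_n$ is the person's latent ability, $\delta_{ik}$ is the difficulty of the k-th step of item i, $m_i$ is the item's maximum score, and the empty sum for $x = 0$ is taken to be zero. The single ability parameter $\theta_n$ is what encodes the unidimensionality assumption: every item is modeled as measuring the same underlying PSR construct.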
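
And for a concrete, if toy, picture of what "navigating the parameter space through multiple experimental runs" means, here is a minimal sketch in Python. It illustrates the idea only; it is not part of the InquirySpace toolset, and the pendulum context and all names are assumptions.

import math

g = 9.81  # m/s^2; assumed constant in this toy model

def pendulum_period(length_m):
    """Small-angle period of a simple pendulum: T = 2*pi*sqrt(L/g)."""
    return 2 * math.pi * math.sqrt(length_m / g)

# Multiple "experimental runs" stepping through one parameter (length).
runs = [(length, pendulum_period(length)) for length in (0.25, 0.5, 1.0, 2.0, 4.0)]

print("length (m)  period (s)")
for length, period in runs:
    print(f"{length:10.2f}  {period:10.2f}")

# Plotting period**2 against length would reveal the linear pattern a student
# is expected to identify in a parameter space plot (and then explain).

Each run is one point in the parameter space; spotting the pattern across runs, and deciding which runs to do next, is the kind of reasoning the PSR construct tries to capture.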