ELI MEIR

SimBiotic Software
Public Discussion

  • Eli Meir

    Lead Presenter
    May 13, 2018 | 10:12 p.m.

    Hello, I'm excited to share this virtual lab we built and a bit of the research that accompanied it. I'll be happy to answer any questions, and I'm also very interested to hear others' experiences with varying the level of constraint / openness given to students in STEM activities.

  • Heather Howell

    Researcher
    May 15, 2018 | 11:16 p.m.

    Really interesting work. One question that came to mind for me was the degree to which the quality of the feedback matters, or how quality and immediacy interact. In other words, part of the challenge of giving feedback in the more open situations is (I imagine) that it's harder to generate good feedback. But I could imagine that there would still be some benefit from feedback that's only mostly on the mark, and probably an identifiable point where that benefit drops off. Or you might look at high-quality feedback in an open condition but vary the immediacy. It seems like your system offers a nice experimental space in which to take up this kind of question.

  • Eli Meir

    Lead Presenter
    May 16, 2018 | 09:56 p.m.

    Thanks for watching and for your comments. We have a little bit of data on this question. In a different aspect of the project (where we were trying out intermediate-constraint formats for questions that might otherwise be short-answer), some students received an intermediate-constraint format with generic feedback, while others received feedback specific to their answers. The students who got generic feedback didn't improve on their initial answers, while those who received specific feedback tried more times and improved. So specific feedback does matter. We also did a controlled experiment with the experimental design simulation shown here, where we removed the Check My Setup button that provides feedback. Interestingly, those students on average spent more time on the simulation, making and running more different designs, but on average were less successful at reaching an error-free design and were less likely to include replication, including on follow-up experiments where no students received feedback. So the feedback appears to help students learn more efficiently as well as better.

    That doesn't quite get to your point, however, about the quality of feedback. It's an interesting question, but unfortunately I don't think I can address it with our current data. If I had to guess, I'd guess that if you aggregated categories (for instance, instead of the dozens of answer categories we have now, just four or five big ones), you'd still see a benefit.

  • Daniel Damelin

    Facilitator
    Senior Scientist
    May 14, 2018 | 03:29 p.m.

    This project sounds intriguing. I think I can imagine an environment with high and low constraints. What features make for an intermediately constrained environment?

    Do you use click-stream log data as part of your analysis for giving feedback, or is it more focused on specific student settings and responses?

    In a project I'm working on called InquirySpace we are also trying to understand how best to scaffold student understanding and engagement in scientific practices.

  • Eli Meir

    Lead Presenter
    May 14, 2018 | 04:48 p.m.

    Hi Daniel, thanks for stopping by.

    We've tried varying the level of constraint in several settings, but in the lab activity we highlight in the video, what we mean by intermediate constraint is, for instance, that instead of allowing students to continuously vary a couple of potential causative variables in their experiments (parasite load; herbicide amounts), students can either add or not add those to each experimental plot (a binary choice). Similarly, other variables are limited to certain sets of choices rather than being continuously variable, students can use between 1 and 8 experimental plots, and so on. So there is still a large space for them to explore, but quite a bit less than we could have given them within the simulation we have.
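
    To give a very rough, concrete picture, here is a minimal sketch in Python of what such an intermediate-constraint design space could look like as a data structure. The variable names, allowed values, and limits are invented for illustration; they are not the module's actual internals.

        # Toy sketch of an intermediate-constraint experimental design space.
        # Names and allowed values are illustrative only.
        from dataclasses import dataclass, field
        from typing import List

        ALLOWED_SIMPLOID_COUNTS = {5, 10, 20}   # a small menu instead of free numeric entry
        MAX_PLOTS = 8

        @dataclass
        class Plot:
            add_parasites: bool = False         # binary: add or don't add (no continuous load)
            add_herbicide: bool = False         # binary: add or don't add (no continuous amount)
            num_simploids: int = 10             # restricted to ALLOWED_SIMPLOID_COUNTS

        @dataclass
        class ExperimentDesign:
            plots: List[Plot] = field(default_factory=list)

            def is_within_constraints(self) -> bool:
                return (1 <= len(self.plots) <= MAX_PLOTS and
                        all(p.num_simploids in ALLOWED_SIMPLOID_COUNTS for p in self.plots))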

    One of the guiding philosophies for our project was to design the activity from the start to be amenable to categorization and feedback, rather than first designing the activity and then looking for algorithms to categorize student work. That was the genesis of thinking about constraint from the get-go. It later turned out that constraint, in and of itself, seems useful for students. In this case we don't use the clickstream at all for categorization; rather, we use the setup the student has at a point in time. In many cases, I think raw clickstreams seem more informative than they really are.
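
    Continuing the toy sketch above, snapshot-based categorization might look roughly like this. The rules, category names, and feedback text here are invented for illustration; our actual categories are different and far more numerous.

        # Toy sketch: categorize a snapshot of the student's current design.
        # Rules, category names, and feedback text are invented for illustration.
        def categorize(design: ExperimentDesign) -> str:
            treated = [p for p in design.plots if p.add_parasites or p.add_herbicide]
            controls = [p for p in design.plots if not (p.add_parasites or p.add_herbicide)]
            if not controls:
                return "no_control"
            if any(p.add_parasites and p.add_herbicide for p in design.plots):
                return "confounded_treatments"
            if len(treated) < 2 or len(controls) < 2:
                return "no_replication"
            return "looks_good"

        FEEDBACK = {
            "no_control": "Consider adding a plot with no treatments to compare against.",
            "confounded_treatments": "Changing two things in one plot makes the cause hard to pin down.",
            "no_replication": "Try repeating each treatment in more than one plot.",
            "looks_good": "This design looks sound -- run it and see what happens.",
        }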

    InquirySpace looks interesting - I've seen various bits of it. Lots of nice tools at the Concord Consortium. Thanks for your comments.

  • Carrie Willis

    Facilitator
    Technology Director and Teacher
    May 16, 2018 | 09:26 p.m.

    Sounds very interesting! Is this only geared toward online undergrad students? Is the feedback always algorithm-generated, or do teachers have the option of giving human feedback as well? Love the possibilities with this concept model.

  • Eli Meir

    Lead Presenter
    May 16, 2018 | 09:31 p.m.

    Carrie,

    Thanks for watching. While the module is likely useful in high school as well, we currently focus on college undergraduates. However, it is used in many in-person classes too, not just online courses. With this audience, we made all the feedback algorithm-driven because many of these classes are too large for instructors to give individual feedback in a timely way (and we wanted basically instant feedback to help students learn). But there are a lot of opportunities for instructors who wish to use the module to generate class discussions about the particular case and about experimental process in general. Instructors can also ask students to write up a report from their data. So there are plenty of ways for teachers to take things beyond what we give students.

  • Kiley McElroy-Brown

    Researcher
    May 17, 2018 | 10:22 a.m.

    This is an interesting project, especially since you are primarily working in undergraduate classrooms. Can you say more about the specific courses that are using this tool? Are the classes for first-year students? How are the instructors being informed of the feedback that students are receiving? And, do you know what they are doing with that information?

    I'm also interested in how students are making conclusions about their experiments. In your video, you indicate that open-response questions are difficult to analyze for instant feedback. Are students writing their own conclusions within your module? If not, why did you make that choice?

  • Eli Meir

    Lead Presenter
    May 17, 2018 | 10:52 a.m.

    Hi Kiley, thanks for taking a look.

    The lab is being used primarily in college introductory biology classes (both majors and non-majors), though a fair number of second- and third-year biology classes are also picking it up. It's used mostly in ecology, environmental science, and related fields, but a few classes are in more cell/molecular areas.

    We have a whole interface for instructors where they can see student work on questions and activities within our labs. For the complex design activity we showcase, our output to instructors is not that sophisticated at the moment. We report the student's last design in each activity before they moved on (there are two separate experimental design activities, the second slightly more complex than the first). For each design, we tell the instructor how we categorized the student's design, that is, what feedback category we put them in. We also give them a text summary of the student's design: for each experimental plot the student uses, we show what the student put into that plot.
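
    As a loose illustration (not our actual report format), a per-student summary along those lines might be generated something like this; the field names, category labels, and wording are invented, and the sketch uses a plain dict per plot so it stands alone.

        # Toy sketch of a per-student design summary for instructors.
        # Field names, categories, and wording are invented for illustration.
        def instructor_summary(student: str, category: str, plots: list) -> str:
            lines = [f"{student}: last design categorized as '{category}'"]
            for i, plot in enumerate(plots, start=1):
                treatments = [name for name, added in plot["treatments"].items() if added]
                contents = " + ".join(treatments) if treatments else "no treatment"
                lines.append(f"  Plot {i}: {plot['organisms']} simploids, {contents}")
            return "\n".join(lines)

        print(instructor_summary(
            "Student A", "no_replication",
            [{"organisms": 10, "treatments": {"parasites": True, "herbicide": False}},
             {"organisms": 10, "treatments": {"parasites": False, "herbicide": False}}]))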

    Conclusions are hard, and we punted on that a little. In preliminary iterations we had an open-response box for students to give a conclusion. However, in the final release versions we avoid any use of open response because of the grading burden in large intro science classes. So we tried two other intermediate-constraint interfaces we've worked on during this project, which we call WordBytes (see the paper by Kerry et al., 2017) and LabLibs (a MadLibs-type interface). WordBytes turned out to be too challenging here: students wanted more choices than we could provide and score, because the variety of ways students draw conclusions from an experiment is really high. So, paradoxically, we ended up with the more constrained LabLibs interface, where students construct a sentence that reads "My results ______ the hypothesis that _______ cause the Simploid sickness, because ______ in the experimental group was ______ in the control group". Each blank has several appropriate choices for students to select from. This seems to work well from a usability perspective, though we don't have any validity data.

    There is an obvious point at which instructors can ask students to write a scientific report from their experiments if they want a more free-form conclusion. I'd imagine you have similar issues with Geniventure in terms of the range of possible conclusions students might draw and how to capture those without a lot of instructor effort. We didn't come up with a perfect solution, but I hope that helps.
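
    For a rough sense of the mechanics, a LabLibs-style constrained conclusion builder can be sketched like this. The sentence frame follows the one quoted above; the choice lists for each blank are invented stand-ins, not the module's actual options.

        # Toy sketch of a LabLibs-style constrained conclusion builder.
        # The sentence frame matches the description above; choice lists are invented.
        TEMPLATE = ("My results {verdict} the hypothesis that {cause} cause the Simploid "
                    "sickness, because {observation} in the experimental group was "
                    "{comparison} in the control group")

        CHOICES = {
            "verdict": ["support", "do not support", "partially support"],
            "cause": ["parasites", "herbicides", "parasites and herbicides together"],
            "observation": ["the rate of sickness", "the number of healthy simploids"],
            "comparison": ["higher than", "lower than", "about the same as"],
        }

        def build_conclusion(selections: dict) -> str:
            # Only allow choices from the constrained menus.
            for blank, choice in selections.items():
                if choice not in CHOICES.get(blank, []):
                    raise ValueError(f"'{choice}' is not an allowed choice for '{blank}'")
            return TEMPLATE.format(**selections)

        print(build_conclusion({"verdict": "support", "cause": "parasites",
                                "observation": "the rate of sickness",
                                "comparison": "higher than"}))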

  • Jim Hammerman

    Facilitator
    Co-Director
    May 20, 2018 | 12:54 p.m.

    Your SimBio project sounds fun and engaging, and your results point to some interesting and important findings about how some -- but not too much or too little -- structure may help support student learning about experimental design, especially if you want to be able to offer timely feedback to students.

    I wonder whether and how the simulations you've created connect to real biological issues -- do the simulations teach concepts about biology that matter? -- and what range of types of environmental experiments you're able to support?

    You might also check out the work of the EcoXPT project, which is trying to teach middle school students about experimentation in the context of an environmental mystery. Here's the link from last year's video showcase: http://stemforall2017.videohall.com/presentatio...

  • Eli Meir

    Lead Presenter
    May 20, 2018 | 07:59 p.m.

    Hi Jim, thanks for your thoughts. In general, most of our simulations connect directly to real biology, usually using stories from particular biological studies to motivate the content in the module. In this one, we deliberately used a fake organism because we found (in a previous iteration) that when we had a real biological story in there (in that case, an evolutionary story about snails being preyed on by crabs), the story made it harder for students to think about experimental process: they had to think about both evolution and experiments, and that was a higher cognitive load. So we went to the made-up and very simple "simploids" story so that students could focus exclusively on the experimental process itself. We haven't worked up the data enough to give concrete evidence that this was the right decision, but anecdotally it seems that way to us.

    Thanks for the link to EcoXPT - I am familiar with that cool project.

  • James Diamond

    Facilitator
    Research Scientist
    May 20, 2018 | 12:54 p.m.

    Hi, thanks for sharing this! I really like this notion of intermediate constraints; it sounds like a sweet spot between too constrained and too open. Have you focused at all on PD related to feedback yet, or is that the next project? :-) What I'm wondering is how prepared educators are to use the data to address things like common misconceptions or misunderstandings about experimental design that students face. Thanks again for sharing. This sounds like great work.

  • Eli Meir

    Lead Presenter
    May 20, 2018 | 07:56 p.m.

    Hi James, thanks for looking and for your comments.

    We have not thought at all about PD. We work almost exclusively at the college level, and while we provide instructor guides and suggestions, we assume students are mostly going to work on this on their own and that the lead instructors in the course will decide for themselves how to do follow-up. In general, while PD can be just as valuable at the college level as in K-12, it's often hard to ask faculty to fit it in amongst their other responsibilities.

    I will say, most of the confusions we capture are pretty clear to someone who has a PhD, and likely things that these faculty have encountered quite often previously if they have tried to teach experimental process.

    But it would certainly be a good idea to extend this research to PD in the future. Thanks for the thoughts.

  • Trevor Haney

    K-12 Teacher
    May 20, 2018 | 07:12 p.m.

    This simulation-based lab looks like a lot of fun, and I can see how students would really become engaged in the learning. The learning and constraint graph you show is very eye-opening. Does the experiment in the simulation take a step-by-step approach to the problem addressed, and could students come up with a multitude of different answers to solve this problem? Are you focusing on students learning the steps to solve the problem, or the outcome of the simulation, or perhaps both?

  • Eli Meir

    Lead Presenter
    May 20, 2018 | 08:02 p.m.

    Hi Trevor,

    Thanks for taking a look, and I'm glad you found something interesting in the video. The module has two sections; we focus in the video on the second, more open-ended section, and yes, there are many paths for students to take. In fact, the simulation is complex enough that most students get part, but not all, of the story by the end of their exploration (an opening for a class discussion, should the instructor wish). The first section, though, is more of a tutorial, with more step-by-step guidance on the elements that go into a good experiment. We split it that way because of data from previous iterations of the module and from interviews with students - hopefully we'll have some of that ready for publication soon.
