
Thursday, December 8, 2016

IDET 6430 Wrap Up

Where to begin on what I learned this semester? There is almost too much to write about. Overall, I learned a lot about the IDET process and how integral each step really is. I think it is of utmost importance to plan, and to plan as effectively as possible, at the beginning. You certainly won't know everything that will come up, but having a place to start is important and will guide you through the process.

Getting good feedback is essential for your project, but good feedback is not easy to get. Proper planning and time need to go into getting it, and you need to go to people who will not just pat you on the back but will look for the blind spots in your project. Having your clientele look at your materials before you publish them can really help guide the process. You will learn real quick from their reactions whether it is going to work.

One of the other important things was communication in our group. We developed a system for getting work done and helping each other with each piece of the project. Luckily I was working with a great group of like-minded individuals, and all of us were eagerly working toward the same goal. I can see that this would be problematic if group members didn't work well together or didn't have a good way of communicating with one another.

Getting to the point and determining your objectives is key to planning everything in your project. This is the part of the process you want to spend a lot of time going over and over, making sure you really have the main objectives figured out. There is a saying, "The main thing is to keep the main thing the main thing," and I really feel that idea is the point of determining your objectives. You can certainly spend too little time working on objectives, but I don't think you can spend too much.

Overall, I learned that there is a lot to this process: it takes a lot of time and planning to get started, and then a lot of good feedback to help sort the project out. You need to have a good SME, know the culture, and have an understanding of the client. All this can seem daunting, and it kind of is; that's why you need a plan to tackle it all. Without a plan you wouldn't know where to start or what to do. They say "If you fail to plan, you plan to fail," and that is definitely the case. Having a plan can really help you get through the drudgery of the work and sort through what is important. In the end your project will only be as good as the plan you create, so it's best to spend some time in the beginning developing your plan. Then you will need to evaluate constantly throughout the process to make sure you are meeting your goals and that your plan is working. You should have a good understanding throughout the process of whether or not you are on the right track.

Tuesday, November 15, 2016

Visual Media

Here are my thoughts on visual media, as per our upcoming presentation.

One of my biggest pet peeves is being in a presentation and having the instructor just read off slides that are text heavy. I was at a presentation not too long ago in which the speaker referenced his slide deck and had 0 images. None, zip, zilch, zero. It was unbelievable, really, that a speaker using a projector and a slide deck didn't use a single image to illustrate his points. It's not like it would have been difficult; many of his ideas would have been easily portrayed with an image.

Item #1 for effective visual presentations is to use pictures. If I were presenting on the election and what the electoral map looked like, and I was doing it without any visuals, my audience would be lost as soon as I started. On the other hand, a good map with identifiable color, text, and symbols can really enhance a discussion of the electoral college. Such an image helps make the complications of 50 states of differing sizes and numbers of votes understandable. Trying to explain this topic without a visual would be nearly impossible.

#2 Don't bullet-point and read everything. Too much text and reading is an absolute waste of the tool. College professors are notorious for this bad behavior and poor use of technology. We all know they like to hear themselves talk, but c'mon, don't read it too.

#3 A good presentation slide deck shouldn't make a lot of sense without the presenter. There should be images and text, but the presentation shouldn't necessarily make sense if you are just looking at the deck. A good presenter weaves a story and ties everything together. That's the magic of a good presentation: don't tell and show everything all at once.

Good visual design should ensure everything is legible and understandable, increase engagement, and focus attention. If you are checking your phone or looking at the clock, somewhere along the way the presenter failed in one of these areas.

Tuesday, November 1, 2016

Here are my notes for the debate: Consistent evidence is found for the generalization that there are no learning benefits to be gained from employing any specific medium to deliver instruction


Most current summaries and meta-analyses of media comparison studies clearly suggest that media do not influence learning under any conditions (Schramm 1977)


Basically, the choice of vehicle might influence the cost or extent of distributing instruction, but only the content of the vehicle can influence achievement. (Schramm 1977)


Media as simple vehicles for instructional methods, such as text organization, size of step in programming, cueing, repeated exposures, and prompting. (Lumsdaine 1963)


Media are mere conveyances for treatments being examined and are not the focus of study (Salomon and Clark 1977)


Most media comparison studies to that date had been fruitless and suggested that learning objectives can be attained through “instruction presented by any of a variety of different media” (Levie and Dickie 1973)


Media comparison studies, regardless of the media employed, tend to result in “no significant difference” conclusions (Mielke 1968)


Causal connections between media and achievement are confounded.


Only a .2 standard deviation difference was found between the final exam scores of audio-tutorial and conventional treatments. This difference was equivalent to approximately 1.6 points on a 100-point final exam. Such a small effect is not instructionally significant and could easily be due to confounding. (J. Kulik, Kulik, and Cohen 1979)
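The arithmetic behind that claim is worth spelling out. A minimal sketch, assuming (as the numbers imply, though the note doesn't state it) a final-exam standard deviation of about 8 points:

```python
# Convert a standardized effect size (Cohen's d) into raw exam points.
# Assumption: the implied standard deviation on the 100-point final is
# about 8 points, since a 0.2 SD effect is reported as roughly 1.6 points.
def d_to_points(d, exam_sd):
    """Raw-score difference corresponding to an effect size of d."""
    return d * exam_sd

print(d_to_points(0.2, 8))  # 0.2 SD x 8 points/SD = 1.6 points
```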


We have reason to believe that the lack of difference is due to greater control of nonmedium variables.


The weak but positive finding for college use of computers over conventional media is due to systematic but uncontrolled differences in content and/or method, contributed unintentionally by different teachers or designers (C. Kulik, Kulik, and Cohen 1980)


There is evidence in these meta-analyses that it is the method of instruction that leads more directly and powerfully to learning. It seems not to be media but variables such as instructional methods that foster learning (Glaser 1976)


Novelty effect: these gains tend to diminish as students become more familiar with the new medium. An average effect size of .32 for computer courses tended to dissipate significantly in longer-duration studies. In studies lasting 4 weeks or less, computer effects were .56 standard deviations. This reduced to .3 in studies lasting 5 to 8 weeks, and further to the familiar .2 effect after 8 weeks of data collection. An effect of .2 is weak and accounts for less than 1% of variance (Cohen 1977)
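To see why a .2 effect "accounts for less than 1% of variance," you can convert d to a correlation and square it. A quick sketch using the standard equal-group-size conversion (my assumption here; the conversion formula isn't given in the notes):

```python
import math

def variance_explained(d):
    # Convert Cohen's d to a point-biserial correlation r, then square it.
    # Conversion assumes two equal-sized groups: r = d / sqrt(d^2 + 4).
    r = d / math.sqrt(d**2 + 4)
    return r**2

# Effect sizes from the duration breakdown in the notes above
for d in (0.56, 0.30, 0.20):
    print(f"d = {d:.2f} -> {variance_explained(d):.2%} of variance explained")
```

For d = 0.2 this comes out to just under 1%, which matches the claim in the notes.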


Computers are less novel experiences for college subjects than for secondary students.


Five decades of research suggest that there are no learning benefits to be gained from employing different media in instruction, regardless of their obviously attractive features or advertised superiority...clearly indicate no significant differences


Studies comparing the relative achievement advantages of one medium over another will inevitably confound medium with method of instruction.


Media are delivery vehicles for instruction and do not directly influence learning. Certain elements (like zooming in) might serve as sufficient conditions to facilitate students' learning of the skill being modeled.


We will not find learning differences that can be unambiguously attributed to any medium of instruction


My favorite line is the last one: It seems reasonable to recommend, therefore, that researchers refrain from producing additional studies exploring the relationship between media and learning unless a novel theory is suggested.

Overall the article basically says there is no connection between media and learning. The one issue with the studies is that they are old; none of them are recent, and by recent I mean within the past 30 years. Yes, 30 years, so I am a bit skeptical about the salience of this article.

Tuesday, October 25, 2016

First Principles of Instruction

I found this reading to be interesting and salient. I haven't seen learning broken down in steps or principles quite like this, but it made sense and there were a few pieces that I really liked.

One note to make right at the start: I don't necessarily disagree with problem solving being motivating, but I think there is more to it than just that. Otherwise, why is math not everyone's favorite subject instead of one of their least favorite?

I really like that reflection is highlighted; personally, I think this is one of the most regularly overlooked principles of learning. Being able to reflect on, discuss, and defend a new skill is key to mastery.

I also felt that the creation piece is important and underutilized. Being able to invent and explore helps the learner fully develop skills in ways that the traditional lecture model never allows for.

I am curious, with regard to the activation phase, about misconceptions in previous understandings. I believe these can be detrimental to learning: if misconceptions are not dealt with, learners will fall back on prior knowledge and won't accept the new information. What happens to a learner when new information directly conflicts with what they believe? How is this dealt with? There isn't any information in the text about how to handle it.

The demonstration phase is a great opportunity for learners to gain experience. This phase really can't be overstated; this is where skills are developed and mistakes are made. It needs to be a safe arena where learners feel comfortable making mistakes and learning from them. The reading stated that
feedback has long been recognized as the most important form of learner guidance.
It also stated that making errors is a natural consequence of problem solving and that most learners learn from their mistakes. Key to this is helping them recognize, recover from, and avoid the error in the future.

Lots of great applicable stuff from this reading.

Tuesday, October 18, 2016

Assessment

Of all the readings so far, I feel this one should be required reading for all teachers. I don't really want to rant, but I will say that there is no blueprint anywhere in teacher education programs to help with assessment and how to design a test. This chapter had a lot of great information and would be a nice place for teachers to start when designing a test and determining what to include. One of the most valuable nuggets of wisdom was how to determine what mastery looks like and how to properly measure whether mastery has been met. I don't think most teachers consider just how important it is to be able to answer this question. I do think that assessments would be much improved if teachers took these things into consideration when creating a test.

One thing I thought was interesting was that 'trick' questions are not desirable. Man, I wish some of my former teachers had read and heeded this advice. I understand the idea that in reality things aren't usually cut and dried; real life is messy. Determining whether a student can transfer the knowledge is important in assessing learning, but if you are giving misinformation and compound questions, are you really checking to see how well the learner performs the skill, or are you doing something else entirely?

After every test is administered it should be evaluated, especially items that were missed by most of the learners. There are many reasons why a majority of students might miss a particular question, so it is imperative to find out what the reasons were and make the necessary changes.

Overall, making sure your assessment accurately assesses how well a learner can perform a skill is the key to designing a good assessment. There are a lot of factors, it's a process, and steps can't be skipped or overlooked if the assessment is to be truly effective. Even the best-planned assessments may have unforeseen problems that need to be fixed.

Tuesday, October 4, 2016

Just when you think things won't get more complicated...

Performance objectives are actually quite close to what I did as a teacher. An objective basically spells out what students, or in this case the learner, should be able to do when they finish the unit, lesson, or module. Objectives relate to outcomes, not processes. Because it is impossible to see into someone else's brain and know if they understand the material, objectives must be observable. There are three components to an objective: the action and concept, the conditions that exist while the learner carries out the task, and the criteria for evaluating learner performance. In this book the three areas are summed up as behaviors, conditions, and criteria. Here are my thoughts on each.

Behaviors
One thing to consider when writing out objectives is to ask, "Could I observe a learner doing this?" If the answer is no, you need to reevaluate the objective. This is also where terms like know and understand don't work, because they aren't observable behaviors. Psychomotor skills are usually easily observable; intellectual skills are more difficult, because the learner needs to demonstrate that they understand or know the skill. Terms such as identify, classify, demonstrate, or generate are much better than know or understand.

Conditions
This refers to the circumstances and resources that will be available to the learner while performing the objective. There are four parts: (1) a learning cue to search for information in memory, (2) the characteristics of resource materials, (3) the scope and complexity of the task, and (4) relevant or authentic contexts. I think the fourth is key, because ultimately the main point is transferring the skills from the learning setting to a performance setting.

Criteria
Judging acceptable performance of the skill. Here is what is tricky: determining what counts as mastery. In some cases it's easy, because the learner can either do it or not; in other cases, do you decide that 75% is mastery, or should it be 80%? I think this could be much harder in written form, such as an essay. Rubrics and checklists can be used, but it's still difficult to define complex criteria and acceptable responses.

Friday, September 23, 2016

In this week's reading I got more out of the examples than anything else. I thought the level of detail necessary for good analysis to take place was interesting. The one thing I am curious about is that the methods for gathering data seemed to be based only on interviewing and observation. I am wondering, with all the improvements in technology and analytics, if there aren't other methods as well. One thing I can certainly say is that there is a lot more to this process than I ever would have guessed. Understanding the target population is one thing, a very big thing, but add to it managerial support, aspects of the site, relevance of the skills to the workplace, etc. Honestly, after reading over all that I am a bit overwhelmed. There is so much to consider, and as a 'details' person myself I get the need to know as much as possible; this is quite an exhaustive process to get the information you need.

A few good tips from the chapter: always find out what the site constraints are before starting, as they may be a major problem for your instruction. If computers are needed for the instruction to take place, you need to make sure the software is compatible, the network is capable of running the program, the computers are able to process what you are doing, etc. I have done trainings that started out poorly because the projector didn't work, or the wifi was slow, or the program was blocked by the network filter. We couldn't do any of the training in the way that was planned because of these issues.

I did think it was important to note that as instructional designers work, they go back and fine-tune earlier decisions as they gather new information. I see instructional design as a somewhat fluid process while you are in the analysis phase. You want to be able to make alterations as your discoveries impact earlier decisions. I had a professor who always said, "the devil is in the details," and I can definitely see that with instructional design. There are a lot of details, and getting them right is what determines whether or not the instruction is effective.