RES 5080-375, 376

Data-Driven School Leadership

Course Calendar
(Spring Semester 2014)

Click Here for the current assignment

Date | Class Activities and Assignments

Unit 1

Week of January 12

What are Data (Session 1)

What are data? First, note that the term data is a plural noun, yet we often make statements like:

“The data tells us that…,” or

“The data is obtained from….”

Instead, our statements should be:

“The data tell us that…,” and

“The data are obtained from….”

Now, getting back to the question: what are data? For our purposes, data are measures, including observations, obtained from individuals or objects. For instance, we obtain test scores from students, attitudes and opinions from teachers, and measures of environmental factors from school buildings. We also collect viewpoints about schools from parents and communities. Additionally, you are all familiar with the data collected from environmental scans and the Working Conditions Survey. These, and many other types and sources of data, can provide useful and important information for educational decision-making. Throughout this course we will examine different types of data, consider how the data are (or can be) collected, and address important characteristics of the data, including reliability and validity.

For this week, however, give some thought to what types or sources of data you have access to and how you might use those data. Then post your thoughts to Forum 1 on the Data-Based Leadership ning. To assist you in thinking about data and how data can be used for decision-making, read the article by Mason (2002), Turning data into knowledge: Lessons from six Milwaukee Public Schools.

The article describes how the teachers and staff in six Milwaukee schools learned to use a variety of data sources to improve instructional delivery and student performance. Although the article addresses the use of specialized electronic information software available from CRESST, there is ample description of the types of data the schools used and how they used those data to inform their decision-making. For instance, after analyzing longitudinal local and state test data, one school planned to reallocate school resources in reading, identify low-performing students to receive additional reading resources, and hire two new reading specialists.

Another school took a different approach. They analyzed “event-based” data, a term they used to describe data concerning specific incidents or actions, rather than test data. By tracking patterns of events such as discipline referrals and attendance infractions, personnel learned more about where, when, and how often certain events that affect student behavior at school occurred. These data were used to support decisions about resource allocations.

Julie Marsh has been a frequent contributor to the literature on data-driven decision-making in education. One of her early articles (Marsh, Pane, & Hamilton, 2006), accessible at http://www.rand.org/pubs/occasional_papers/OP170.html, examined factors that affected teachers' and administrators' use of data for instructional decision-making. Read the article. You may find that many of the factors she and her colleagues identified are still relevant today.

In another early article, “Strategies to promote data use for instructional improvement: Outcomes and lessons from three urban districts,” appearing in the American Journal of Education (Vol. 112, No. 4, 2006), Kerri Kerr, Julie Marsh, and others examined strategies used by three urban school districts to promote the use of data for instructional improvement. They found several factors that affected data use. A copy of the article can be found HERE. Several of the other articles appearing in the Journal may also be of interest to you.

A more recent special issue of the Teachers College Record (Vol. 114, No. 11, 2012) explored the increased attention to data use in education policy and practice. One of the articles appearing in that issue is Julie Marsh's "Interventions promoting educators' use of data: Research insights and gaps." A copy of this article can be found HERE. The other articles appearing in that special issue are also relevant to this course; you may want to check some of them out.

As you read these papers, consider what data you, as an administrator or other local school leader, would have had your school collect if it were involved in a similar school reform effort. Then respond to Forum 1 on the Data-Based Leadership ning. You can preview Forum 1 HERE. Before posting your response, however, read the following:

How Should You Respond to the Prompts on the Ning?

When posting your responses to the ning, you need to take care with your writing. I do not tolerate poor writing. You shouldn’t either. What you post to the ning will be viewed by your fellow students. You do not want them to get the idea that you are ignorant.

If you submit a post and it contains grammatical or spelling errors—or if the prose does not flow logically—you will not be given credit for it AND you will be given an additional 4-5 page assignment.

You should construct your posts in Word, run both the spell checker and the grammar checker, correct any mistakes, and then copy the post into the ning. DO NOT post your response as an attachment.

For additional help on writing, see the relevant section in the Syllabus.

What Is Expected in Your Response to Prompts on the Ning

The forum prompts on the ning are intended to elicit your thoughtful reaction to the readings. In your response, I will be looking for evidence that you have read the articles carefully. You do not have to agree with the authors of the articles, but if you disagree, you should be able to supply a compelling argument for your disagreement. In responding to the prompts on the ning, it is always a good idea to contribute what you learn from other readings.

It is from your responses to the prompts on the ning that I determine whether you are attending to the assignments. If I am not convinced, I will grade you down.

Be sure to read the section on Written Work in the Syllabus. Your responses to the prompts on the ning should be professionally written. As I said above, I do not tolerate poorly written work. For this reason, I suggest you compose your response in Word and then paste it in as a REPLY to the ning prompt.

There is a due date associated with each of the ning prompts. Generally, this is COB on the Friday following the Monday on which the prompt is given. You will then have the following weekend to post replies to other students' initial posts. You can post later than that, but late posts are worth less credit: the later the post, the lower the credit. I do, however, allow TWO late posts, as long as they are contributed before the end of the week following the original due date.

Replying to other students' posts. While you do not always have to do so, you are expected to read and comment upon other students' posts. Your replies should be thoughtful and should make a contribution. If you disagree with something someone else has posted, say so and explain why. Occasionally, an "atta-boy," "well done, I agree" type of reply is acceptable, as long as such replies are held to a minimum.

My Commentary on Your Posts to the Ning

After you have posted, I try to respond to and comment on everyone's initial posting. If you have posted on or before the due date, you are almost assured of a comment from me. Postings that are completed after the due date may or may not receive commentary from me.

 

Unit 2

Weeks of January 19 & 26

 

Basic Statistical Analysis

Principals (and many teachers) often have access to large quantities of data collected both within their schools and from outside sources. These data could be used, for instance, to test hypotheses about what is needed to help students improve, to study the effects of changes in teacher practices or teacher placement, or to examine correlations among student characteristics, attitudes, comportment, and achievement. These types of analyses, however, require some competence in basic statistical analysis.
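To make the last of these concrete, here is a minimal sketch in Python of the kind of correlational question a principal might ask. The numbers are invented purely for illustration, and the scipy library is assumed to be available:

```python
# A minimal sketch (invented data): is there a relationship between
# days absent and end-of-year test scores?
from scipy import stats

absences = [2, 0, 5, 8, 1, 12, 3, 7, 4, 10]          # days absent
scores = [88, 94, 76, 70, 90, 61, 85, 72, 80, 66]    # test scores

r, p = stats.pearsonr(absences, scores)
print(f"r = {r:.2f}, p = {p:.4f}")   # a strong negative correlation here
```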

Activities: I have been working on developing a brief manuscript on basic statistical analysis. However, I recently found on the web a wonderful set of basic lessons on statistics developed by Dr. John W. Evans of Nova University. The site (given below) contains eight lessons, seven of which are substantive (the eighth is a summary and wrap-up).

For this week (the week of January 19) you should work through the first four of the lessons; we'll cover the last four next week. The forum for this week, which contains five exercises, is Forum 2a. You can preview Forum 2a HERE. You can work in groups for this forum. (I will provide answers to Forum 2a HERE on Monday, January 27.)

Here is the link to the website: http://www.fgse.nova.edu/edl/secure/stats/index.htm

In computing answers to the exercises, you might find the following sites helpful:

http://www.measuringusability.com/pcalcz.php

http://ncalculators.com/statistics/t-test-calculator.htm

http://vassarstats.net/newcs.html

I have also provided, for your use, an Excel spreadsheet for computing an independent samples t test.
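If you would like to check your spreadsheet (or hand) computations programmatically, the same test takes only a few lines of Python. This is a sketch with invented scores, not data from the exercises, and it assumes the scipy library:

```python
# Independent samples t test (invented data); mirrors what the
# Excel spreadsheet computes.
from scipy import stats

group_a = [72, 85, 78, 90, 66, 81, 74, 88]   # e.g., scores under method A
group_b = [65, 70, 74, 68, 72, 60, 77, 71]   # e.g., scores under method B

t, p = stats.ttest_ind(group_a, group_b)     # assumes equal variances
print(f"t = {t:.2f}, p = {p:.4f}")
# For unequal variances, use stats.ttest_ind(group_a, group_b, equal_var=False).
```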

Week of January 26. By now you have had an opportunity to work through Lessons 1-4 on Dr. Evans's website. In Forum 2b on the Data-Driven Leadership ning I have given you five additional exercises (you can view these HERE). Please go to Forum 2b and enter your responses/answers/comments to the prompts. You can work in groups. (Answers to Forum 2a can be found HERE; answers to Forum 2b will be released on Monday, Feb. 2.)

In completing the exercises in Forum 2b, you may find the discussion of chi-square in Chapter 8 of the Vassar Stats online text helpful.
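And if you want to verify a chi-square computation by machine, here is a sketch along the same lines (the counts are invented; scipy is assumed):

```python
# Chi-square test of independence (invented counts): is participation
# in a program related to passing the state test?
from scipy.stats import chi2_contingency

#                passed  failed
table = [[45, 15],   # participated
         [30, 30]]   # did not participate

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```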

 

Unit 3 Week of February 2.

Intelligent Use of Standardized Assessment Data

School executives have available to them myriad sources of data on which to base school decisions. Some of these data reside at the district and state level; some arise from classroom data. Yet another source of data derives from surveys (of students, teachers, parents, and the community).

In this unit, students gain the knowledge they need to critically evaluate, analyze, and interpret standardized test data. This includes examining the reliability (including standard errors of measurement) and validity of the tests (achievement tests, aptitude tests, and other types of tests, including benchmark assessments and quarterly assessments), and making sense of various kinds of derived scores (e.g., standard scores, percentiles, percentile bands, stanines, performance levels, and growth scale scores). Using simulated test data, students perform an item analysis, compute reliability coefficients, and develop evidence of validity. Given a technical or administrator's manual of a commercially available standardized test, students write a brief report giving their rationale for why they would recommend adopting or rejecting the test (for a given purpose).
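Two of the quantities mentioned above are simple to compute once you have a test's summary statistics. The sketch below (Python, with illustrative numbers only; nothing here comes from an actual test manual) shows the standard error of measurement and a few common derived scores:

```python
# Standard error of measurement and derived scores (illustrative numbers).
from statistics import NormalDist

sd, reliability = 15.0, 0.91           # test SD and reliability coefficient
sem = sd * (1 - reliability) ** 0.5    # SEM = SD * sqrt(1 - reliability)
print(f"SEM = {sem:.1f}")              # about 4.5 score points

raw, mean = 115, 100                   # one student's raw score
z = (raw - mean) / sd                  # standard (z) score
percentile = NormalDist().cdf(z) * 100             # assumes normal scores
stanine = min(9, max(1, round(z * 2 + 5)))         # 1-9 scale
print(f"z = {z:.2f}, percentile = {percentile:.0f}, stanine = {stanine}")
```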

Activities:

As an individual knowledgeable about matters involving testing, assessment, and related data, you may be called upon to help interpret norm-referenced test results or to evaluate the technical quality of one or more assessment instruments. These might include achievement or aptitude tests published by national commercial testing companies (e.g., CTB/McGraw-Hill, Riverside, Psychological Corporation), by the state, or by your district. You may even be called upon to evaluate (or even help develop) instruments for assessing attitudes, opinions, or interests (e.g., career interests). To do an effective job of evaluating or developing assessment instruments, you need to know how to go about assessing their reliability and validity.

Begin by reading the following three articles:

Shepard, L. (1993). Evaluating test validity. Review of Research in Education, 19, 405-450.

This rather long article is a seminal piece of work. I do not expect you to study it in depth, but you should read through it briefly. You need to get a sense of what validity means and how it is evaluated.

McMillan, J. (2000). Fundamental assessment principles for teachers and school administrators. Practical Assessment, Research & Evaluation, 7(8).

Also, read my brief paper on converting raw scores to scale scores.

Since internal consistency reliability is such an important technical quality of an assessment, it is instructive for you to read more about it and to attempt to estimate it yourself. Begin by reading the following article:

Iowa Technical Adequacy Project (2003). Procedures for estimating internal consistency reliability.

Then, go to the Computing Coefficient Alpha Demonstration and try to compute the internal consistency reliability of the Motivations for Reading instrument. If you need it, or to check your own work, an Excel file containing the solution is provided.
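For those curious about what such a spreadsheet is actually doing, coefficient alpha can be computed directly from an item-score matrix. Here is a minimal sketch in Python with a tiny invented data set (rows are students, columns are items); it is not the Motivations for Reading data:

```python
# Coefficient (Cronbach's) alpha from an item-score matrix (invented data).
import numpy as np

scores = np.array([[3, 4, 3, 4],
                   [2, 2, 3, 2],
                   [4, 4, 4, 5],
                   [1, 2, 1, 2],
                   [3, 3, 4, 3]])

k = scores.shape[1]                           # number of items
item_vars = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)    # variance of students' totals
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"alpha = {alpha:.2f}")
```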

When you have completed this exercise, use Forum 3 to discuss what you discovered and answer a few additional questions. You can preview Forum 3 HERE. Please complete this forum this week (on or before Friday). You can work in groups for this forum.

Unit 4 Week of February 9.

Analyzing Longitudinal Trend Data 

Current-year data (e.g., current end-of-year test results) do not provide much useful information (beyond the status quo). Useful information derives from studying trends: trends in achievement, trends in enrollment, trends in teacher mobility, and so on. Yet examining trends can be tricky. Many times trends computed for aggregates do not follow (in fact, can deviate markedly from) trends computed for disaggregated components. This phenomenon, known as Simpson's Paradox, is one of many artifacts that can arise from ecological inference, of which users of trend data need to be aware. Furthermore, trends of true cohorts can differ from trends of quasi-cohorts (groups over years), and from grade-level cross-sections. All these ways of examining data over years (or grade levels) can paint a different picture of the trend examined.
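To see Simpson's Paradox with concrete numbers, consider the invented pass rates in the Python sketch below: each subgroup improves from one year to the next, yet the schoolwide rate declines, because the mix of students shifts toward the lower-scoring group.

```python
# Simpson's Paradox with invented pass-rate data.
year1 = {"Group A": (90, 100), "Group B": (30, 50)}   # (passed, tested)
year2 = {"Group A": (46, 50), "Group B": (62, 100)}

for label, groups in [("Year 1", year1), ("Year 2", year2)]:
    for name, (passed, tested) in groups.items():
        print(f"{label} {name}: {passed / tested:.0%}")
    total_passed = sum(p for p, t in groups.values())
    total_tested = sum(t for p, t in groups.values())
    print(f"{label} overall: {total_passed / total_tested:.0%}")

# Group A improves (90% -> 92%) and Group B improves (60% -> 62%),
# yet the overall rate falls (80% -> 72%).
```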

In this unit, students gain the knowledge they need to examine trends over time and grade levels. In the process, they gain competence in guarding against problems that can arise from artifacts of ecological inference. Students then use a sample set of data to examine trends and write a report interpreting what they discovered.

Activities: 

To gain an appreciation of the complexity that the analysis of longitudinal data can entail, read the first two chapters of Singer & Willett (2003), Applied Longitudinal Data Analysis. My treatment of longitudinal data analysis will be nowhere near as technical or complex.

Next, read my paper on Analyzing Longitudinal Achievement Data and then respond to Forum 4 and Forum 5 on the Data-Driven Leadership ning. Please complete these forums by Friday. NOTE: Forum 5 can be a small-group activity; Forum 4 is NOT a group activity.

 

Unit 5 Weeks of February 16 & 23. Fostering a Data Based Culture in the Classroom

Schools in districts from Iredell County, NC, to Grays Harbor County, WA, are finding success in having students track their own achievement data. In these schools, the walls display charts and graphs rather than “cute” posters. Having students examine their own data is one of five recommendations published by the Institute of Education Sciences (IES).

In this unit, students (principals-to-be) examine tested strategies and methods that foster data-based decision-making in the classroom.

Activities: Using Classroom Data to Improve Student Achievement  

Read the rather long article by NCEE and respond to Forum 6 and Forum 7 on the Data-Based Leadership ning. You can preview Forum 6 HERE and Forum 7 HERE. This paper is filled with useful information about how data (collected at the classroom level, school level, and district level), when used effectively, can have an important impact on student achievement. NOTE: Forum 6 is due by Friday, February 21. Forum 7 will be due by Friday, February 28.

The meat of the article, for our purposes, begins on page 5 and continues through page 38. 

The authors of the report offer five recommendations (see p. 7) for establishing a framework for using data effectively in making instructional decisions. For our purposes, since this course is aimed at classroom- and school-level educational decisions, only recommendations 1 through 4 are important. Although the authors found only inconclusive empirical evidence to support their recommendations, they point out that their recommendations are based on experience and (some) research. I would point out, further, that their recommendations appear both reasonable and sensible.

In Recommendation 1, relevant to classroom decision-making, the authors suggest that teachers, after reviewing several sources of data, formulate hypotheses about factors that contribute to “students' success and the actions they can take to meet student needs” [p. 10]. They suggest, further, that teachers then implement those actions and test their hypotheses by collecting additional data. The authors go on to suggest ways that Recommendation 1 can be carried out.

Recommendation 2 is based on the assumption, from the classroom assessment literature, that students can learn from their own mistakes and accomplishments when they have a clear understanding of what is expected of them and they receive clear, easily interpretable feedback about their performance. Again, the authors of the report provide guiding examples of how to implement this recommendation.

The third recommendation, the first of two aimed at schoolwide data use, is that schools work hard to establish a culture of data use in instructional decision-making. The authors of the NCEE report suggest that schools implement a data team with representative members of the teaching faculty from all grade levels. The team should be charged with developing a clear plan of action for using data.

Recommendation 4 is that schools provide the resources necessary for developing a data-driven culture. This includes providing training and professional development related to data use and interpretation. Included in the recommendation is the suggestion that schools provide time for collaboration among teachers in using data. Technology requirements are also addressed in this recommendation. As with the other recommendations, the authors provide guidance on how to carry this recommendation out.

 

Unit 6 Weeks of March 2, 9, & 16.

I want to acknowledge that you, as a group, did a great job in responding to Forum 6. I was able to respond to most of you, but found myself having little to comment upon as the responses progressed. In fact, Jamie, Ann, Paul, Cameron, Christi, Timothy, Rebecca, Donna, Ryan, Renita, Erin, and Thomas, I did not reply to your posts simply because I didn't see a need to. Like the other students in the class, you all provided excellent replies to the prompt. Any commentary I would have added would have been redundant to most of the comments I made to earlier posts. So, please forgive me for not replying individually to your posts. I did want to thank Ryan for the Roby reference, though.

Building a School-level Data Warehouse

Larger school districts across the country have long had in place large collections of longitudinal data on their students. These collections are typically the province of centralized departments of research, evaluation, and assessment. While these departments can often provide individual schools with data and results to answer general questions about teaching, learning, and assessment, their databases usually are not structured to provide quick answers to specific questions at the teacher, classroom, and student level.

In this unit, which spans this week and next, students learn the basics of constructing a longitudinal database aimed at providing the information teachers, counselors, and other school administrators need to make timely decisions concerning students, teachers, and instruction. They then, using data provided, construct a limited longitudinal database.  

Activities

Data warehousing

The idea of data warehousing, or the idea of storing large databases to aid decision-making, has been around for at least five decades (Breiter & Light, 2006). Originally, these databases were constructed to support Management Information Systems (MIS), and later Decision Support Systems (DSS). More recently, data warehouses have become an important component of Business Intelligence. It is only in the past 20 years or so that school districts have begun warehousing data for decision-making. Even now, however, data warehouses remain the province of large school districts, since only the nation's largest districts have the financial and personnel resources to host and maintain a sophisticated data warehouse.

What are some examples of how the data in a longitudinal data warehouse can be put to good use? Here is one example. It is well known that once children are selected, on the basis of a cognitive ability test such as the CogAT, for participation in a talented and gifted (TAG) program, they are rarely turned out of the program. Yet it is also known, among measurement and assessment experts, that the scores obtained from such tests are fallible, especially for young children. What this means is that the scores have a certain amount of measurement error in them, sometimes a very appreciable amount. When, for a given student, this error is large and positive, the student will receive a score that overpredicts his or her ability level, thus admitting the student to the program erroneously. It is highly likely that such a student, if tested again, would obtain a much lower score. This results from a statistical artifact known as regression toward the mean. When this occurs, we can expect measures of the child's achievement to be lower than his or her CogAT score would predict. With longitudinal data it would be possible to monitor students' achievement relative to their expected achievement over years, in much the same way schools use EVAAS to compare actual achievement to expected achievement.
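A short simulation makes the effect vivid. The Python sketch below (all numbers invented) selects students whose first, error-laden test score clears a cut point and shows that, on an equally fallible retest, their average score falls back toward the mean:

```python
# Regression toward the mean, simulated (all numbers invented).
import numpy as np

rng = np.random.default_rng(0)
true_ability = rng.normal(100, 15, 10_000)         # true scores
test1 = true_ability + rng.normal(0, 7, 10_000)    # observed = true + error
test2 = true_ability + rng.normal(0, 7, 10_000)    # independent retest

selected = test1 >= 130                            # TAG-style cut score
print(f"mean on test 1: {test1[selected].mean():.1f}")
print(f"mean on retest: {test2[selected].mean():.1f}")  # noticeably lower
```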

As another example, with a good longitudinal data warehouse, it would be possible to evaluate teachers on the basis of how their students perform in future years and in other courses, on the assumption that good teaching is reflected in students' future learning behaviors (e.g., attitudes toward learning, performance in later courses). With the right data warehouse, it would be possible to link students, in later years, with the teachers they had in earlier years.

As I mentioned above, the construction of sophisticated, complex data warehouses is an expensive undertaking. Even so, local schools can, in a small but effective way, begin constructing local data warehouses. In the readings assigned for this unit, you will learn that there are some common steps that should be followed in doing so. For instance, careful thought needs to be given to what data to include in the warehouse. It is not always a good idea to include data just because they happen to be available. Much of the data that might otherwise be included is not particularly useful for educational decision-making. At a minimum we need to ask ourselves, “What can we do with the data? To what use can we put them?” and “Are the data reliable? Will we be able to draw valid inferences from the data?”

Read the following articles:

Protheroe, Nancy (2001). Improving Teaching and Learning with Data-Based Decisions: Asking the Right Questions and Acting on the Answers. Educational Research Service.

Breiter, A. & Light, D. (2006). Data for school improvement: Factors for designing effective information systems to support decision making in schools. Educational Technology & Society, 9(3), 206-217.

Bergner, T. & Smith, N. C. (2007). How Can My State Benefit from an Educational Data Warehouse? Data Quality Campaign/National Center for Educational Accountability.

These three articles should help you realize the importance of collecting and storing data that can help schools move toward, and take advantage of, data-based decision making.

The first of the three papers, the paper by Protheroe, starts off by providing evidence that data, especially assessment data, when used correctly, can improve instruction and learning. Protheroe points to several school districts that have experienced considerable success with raising achievement levels through data-based decision making. She offers guidance on the types of data needed and how schools and districts can best use these data.  

Breiter and Light, in their paper, begin by tracing the development of data warehouses from the early stages of management information systems (MIS), through Decision Support Systems (DSS), to the contemporary concepts of Business Intelligence and On-line Analytical Processing (OLAP). In doing so they examine many of the pitfalls and pratfalls that developers have encountered along the way. They then turn their attention to school and school district information systems. In particular, they describe their mixed-method study of the New York City Department of Education's (NYCDOE) effort to provide a data-driven decision-making tool at the third- through eighth-grade levels.

They show how both local administrators and teachers use this tool. They provide guidance on what data should be collected (and what data should probably be ignored).

The third paper, by Bergner & Smith, will seem less relevant than the other two since it deals mainly with the development of data warehouses at the state level. In particular, it examines how three states, Delaware, Maryland, and Wyoming, went about designing and developing their data warehouses. Although the discussion is not directly related to local districts' data warehouse development, the paper does offer useful insights into what it takes to develop a usable warehouse and suggests several questions that should be asked, and answered, before embarking on such an undertaking. As you read this paper, imagine yourself as a principal wanting to develop a data warehouse for your school. What would you want to accomplish by building a data warehouse? Whom would you include in the design stages? Whom would you expect to use the warehouse, and how would you expect them to use it? What kind of training would you need to provide? What kind of software would you need (could you, for instance, build a local data warehouse using Microsoft Excel or Access)?
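On that last question: for a genuinely small warehouse, ordinary tools suffice. As a thought experiment, here is a toy sketch using Python's built-in sqlite3 module; the schema, names, and scores are all invented for illustration, not a recommended design:

```python
# A toy longitudinal "warehouse" (invented schema and data).
import sqlite3

con = sqlite3.connect(":memory:")   # a file path would make it persistent
con.execute("CREATE TABLE students (id INTEGER PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE scores
               (student_id INTEGER, year INTEGER, subject TEXT, score REAL)""")

con.execute("INSERT INTO students VALUES (1, 'Pat')")
con.executemany("INSERT INTO scores VALUES (?, ?, ?, ?)",
                [(1, 2011, 'reading', 431),
                 (1, 2012, 'reading', 448),
                 (1, 2013, 'reading', 455)])

# One student's reading trend across years:
for year, score in con.execute("""SELECT year, score FROM scores
                                  WHERE student_id = 1 AND subject = 'reading'
                                  ORDER BY year"""):
    print(year, score)
```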

Respond to the following Forums on the Data-Driven Leadership ning.

The original Forum 8 is deleted. Instead, please respond (by March 14) to the forum Your Commentary on This Course on the Data-Driven Leadership ning.

Forum 9. (Due by Friday, March 21) You can preview Forum 9 HERE. In responding to this prompt, I would encourage you to team up with one or two of the other students (from either section) in this class. 

 

Unit 7

This session is deleted. SKIP to Unit 8.

Intelligent Use of Classroom Assessment Data (Session 12)

Over the past decade, educators and assessment specialists alike have touted the benefits of formative assessment (often referred to as assessment for learning), wherein teachers use classroom tests to gain knowledge of students' progress toward important learning targets. Validity, i.e., the validity of the inferences teachers derive from their classroom assessments, is a critical factor in formative assessment. School executives can improve the educational milieu in their schools by helping teachers become better formative assessors of student progress.

In this unit, students learn to differentiate between summative and formative assessments. After studying contemporary literature on formative assessment, students are able to differentiate the formative value of classroom assessments from that of state- or district-provided assessments (e.g., benchmark assessments). After completing this unit, students, acting as school executives, are able to critically evaluate teachers' assessment literacy.

 

Activities:

 

Unit 8 Week of March 23.

Intelligent Use of Surveys and Rating Scales (Session 13)

School-level administrators often overlook the use of surveys to gather data important to the operation of their schools. Yet survey data (including opinionnaires and rating scales) collected from the community, teachers, parents, and students provide a way for school executives to gain important insights into matters that might be causing current problems or that foreshadow future problems for their schools. However, constructing good surveys can be a fairly technical enterprise. Unless care is taken in writing and structuring the survey, the analysis and interpretation of results are often compromised. The articles assigned for this session will help you develop and write good survey instruments. The two articles by Frary are easy to read and provide much useful information. The piece by the Office of Management and Budget (OMB) is more technical, but provides details on what the government requires for federally funded surveys.

Read the following three articles:

Frary, RB (Undated) [Brief guide to questionnaire development]

Frary, RB (1996) [Hints for designing effective questionnaires]

OMB (2006) [Standards for statistical surveys]

Work in small groups to design a short, 10-item survey of teachers' (or parents') attitudes regarding the new Common Core. Submit the survey as an attachment to Forum 10, which can be previewed HERE.

 

Unit 9 Week of March 30.