Volume 42 | Number 1p2 | February 2007


Randall S. Brown, Ph.D., Stacy B. Dale


Objective

To provide an overview of the design, research questions, data sources, and methods used to evaluate the Cash and Counseling Demonstration, and of the resolution of analytic concerns that arose. The methodology was designed to provide statistically rigorous estimates while presenting the findings in a manner easily accessible to a broad, nontechnical audience.


Study Setting

Eligible Medicaid beneficiaries in Arkansas, Florida, and New Jersey who volunteered to participate in the demonstration were randomly assigned either to receive an allowance and direct their own Medicaid supportive services as Cash and Counseling consumers (the treatment group) or to rely on Medicaid services as usual (the control group). The demonstration included elderly and nonelderly adults in all three states and children in Florida. Both age groups in Arkansas and New Jersey, and the elderly group in Florida, consisted primarily of individuals with physical disabilities; the children and nonelderly adults in Florida consisted primarily of individuals with developmental disabilities. The intervention was conducted from 1999 through 2003.


Data Sources

Data included baseline and 9‐month follow‐up surveys of consumers, surveys of the primary informal caregiver and the primary paid worker for sample members, program data, interviews with program staff, and Medicaid and Medicare claims data.


Methods

Descriptive analyses were conducted of program participation, program implementation, and the experiences of hired workers. Program impacts on consumers, caregivers, and costs were estimated using an intent-to-treat approach, comparing regression-adjusted mean outcomes for the full treatment and control groups. A broad set of control variables from the baseline interview and prior Medicaid claims data adjusted for possible preexisting differences. For ease of presentation and interpretation of effects, ordinal scale responses were converted to binary indicators for high and for low values. Two-tailed statistical tests of the estimated effects were conducted at the .05 level. Separate estimates were provided for each state and for each age group. Sensitivity tests assessed the robustness of estimates to outliers (for continuous outcome measures) and to proxy use.
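To make the estimation strategy concrete, the sketch below illustrates an intent-to-treat comparison of regression-adjusted means and the conversion of an ordinal response to binary high/low indicators. It is a minimal illustration only: the file name, column names (satisfaction, treatment, age, baseline_adl, prior_medicaid_cost), and covariate set are assumptions, not details taken from the evaluation's actual data files, and a linear probability model is used here purely for simplicity.

```python
# Illustrative sketch only: file, column, and covariate names are
# assumptions, not taken from the demonstration's actual data.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("analysis_sample.csv")  # hypothetical analytic file

# Convert an ordinal satisfaction scale (e.g., 1 = very dissatisfied
# to 4 = very satisfied) into binary indicators for high and low values.
df["very_satisfied"] = (df["satisfaction"] == 4).astype(int)
df["dissatisfied"] = (df["satisfaction"] <= 2).astype(int)

# Intent-to-treat estimate: regress the outcome on the randomized
# treatment indicator plus baseline covariates. The coefficient on
# `treatment` is the regression-adjusted treatment-control difference.
model = smf.ols(
    "very_satisfied ~ treatment + age + baseline_adl + prior_medicaid_cost",
    data=df,
).fit()

# Two-tailed test of the estimated effect at the .05 level.
print(model.params["treatment"], model.pvalues["treatment"])
```

Because assignment was randomized, the treatment coefficient is an unbiased estimate of the program's impact even without covariates; the baseline controls serve mainly to improve precision and adjust for any chance preexisting differences.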


Principal Findings/Conclusion

The experimental design, high survey response rates, and available sample sizes yielded valid, unbiased estimates of program impacts, with adequate power to detect moderate-size impacts on most outcomes for the key age subgroups examined. For certain survey-based outcome measures related to satisfaction with paid care, the sample had to be restricted to those who received care, excluding cases in which the proxy respondent was also the hired worker. Sensitivity tests suggest that these necessary restrictions were unlikely to have led to overstatement of favorable program effects on these outcome measures. The high proportion of sample members with proxy respondents reflects the frailty of the sample members; similar rates of proxy use in the treatment and control groups suggest that the reliance on proxy respondents did not bias the estimated program effects on survey measures.