IBM SPSS Statistics Grad Pack 24.0 PREMIUM - 12-Month License - Windows or Mac DOWNLOAD - install on up to 2 computers


Commercial Price: $2,300.00
Price: $88.50
You Save: $2,211.50 (96%)
Item Number: 44W59145-D
STUDENTDISCOUNTS.COM EXCLUSIVE: Includes the FREE SPSS Library, a great resource for learning how to use SPSS.

Why buy from StudentDiscounts.com?

  • Proudly located in the USA, with over 20 years of experience.
  • Norton Shopping Guarantee, which includes identity theft protection, a $1,000 purchase guarantee, and a lowest-price guarantee. Don't risk your identity by shopping anywhere else.
  • Check out the independent reviews from real customers on the right side of our website. Don't purchase from any company that lacks independent reviews - don't get ripped off.
  • If you can't download the software, we can ship you a backup DVD for a nominal fee within days.
  • 24-hour support via live chat, with phone support also available. No long-distance calls to a foreign country.
  • Worried about buying the right version? No problem! If you buy the wrong one, you can return it with no penalty and receive a credit toward the version you need.
  • Your download comes super fast! Try us and see.
  • Free SPSS Library. A great resource for learning how to use SPSS.
  • Low prices to support the academic community
  • A portion of our sales goes to support UNICEF - your purchase will help provide children around the world with health care, clean water, sound nutrition, education, protection from abuse and exploitation, and emergency assistance in times of crisis.

A backup disk can be ordered in addition to your download.
Five-year Software Replacement Assurance is available for $9.99.

You may install the software on up to two (2) computers.

The license is good for 12 months. If a 2- or 3-year version is needed, please click here. Runs on Windows 7 (Service Pack 1 or higher), Windows 8, and Windows 10, and on Mac OS X 10.10, 10.11, or 10.12 (Sierra).

For a comparison of all IBM SPSS versions, please click here. No need to worry about purchasing the wrong version: exchanges are allowed!

Includes the following (see below for detailed descriptions of each add-on):

IBM SPSS Base 24  
IBM SPSS Advanced Statistics (a $1200 value)
IBM SPSS Regression (a $1200 value)
IBM SPSS Custom Tables (a $1200 value) - note: this add-on requires that you order the DVD.
IBM SPSS Data Preparation (a $1200 value) 
IBM SPSS Missing Values (a $1200 value) 
IBM SPSS Forecasting (a $1200 value) 
IBM SPSS Decision Trees (a $1200 value) 
IBM SPSS Direct Marketing (a $1200 value) 
IBM SPSS Complex Sampling (a $1200 value) 
IBM SPSS Conjoint (a $1200 value) 
IBM SPSS Neural Networks (a $1200 value) 
IBM SPSS Bootstrapping (a $1200 value) 
IBM SPSS Categories (a $1200 value) 
IBM SPSS Exact Tests (Windows only) 
IBM SPSS Visualization Designer (Windows only) 
IBM SPSS SamplePower (Windows only)

  • No limitation on the number of variables or cases
  • System requirements are at the bottom of this product description
PLEASE NOTE: Will not run on the following:
  • Windows Vista
  • Windows XP
  • Chromebooks
  • iPads
  • Android tablets
  • Smartphones

New in Version 24

  • SPSS Statistics Extensions give you a new way to access and work with open source and third-party programming extensions:
    • SPSS Statistics Extensions Hub is a new interface to manage extensions. It provides an online store-like experience.
    • With SPSS Statistics Custom Dialog Builder for Extensions, it is now easier than ever to create and share extensions based on R/Python and SPSS Syntax for your customized needs.
  • A redesigned experience while importing and exporting the most popular file types enables smarter data management.
  • Many enhancements to the SPSS Custom Tables module offer improved productivity.
  • Gain deeper predictive insights from large and complex datasets.
    • Use the Temporal Causal Modeling (TCM) technique to uncover hidden causal relationships among large numbers of time series and automatically determine the best predictors.
  • Integrate, explore and model location and time data, and capitalize on new data sources to solve new business problems.
    • The Spatio-Temporal Prediction (STP) technique can fit linear models for measurements taken over time at locations in 2D and 3D space.
    • The Generalized Spatial Association Rule (GSAR) finds associations between spatial and non-spatial attributes.
  • Embed analytics into the enterprise to speed deployment and return on investment.
    • Completely redesigned web reports offer more interactivity, functionality and web server support.
    • Enhanced categorical principal component analysis (CATPCA) capabilities.
    • Bulk load data for faster performance.
    • Users can import, read, and write Stata 9-13 files within SPSS Statistics.
    • Enterprise users can access SPSS Statistics using their identification badges and badge readers.
    • A wider range of R programming options enables developers to use a full-featured, integrated R development environment within SPSS Statistics.
IBM SPSS Base Overview, Features and Benefits
 
IBM® SPSS® Statistics Base is easy to use and forms the foundation for many types of statistical analyses.
The procedures within IBM SPSS Statistics Base will enable you to get a quick look at your data, formulate hypotheses for additional testing, and then carry out a number of statistical and analytic procedures to help clarify relationships between variables, create clusters, identify trends and make predictions.
  • Quickly access and analyze massive datasets
  • Easily prepare and manage your data for analysis
  • Analyze data with a comprehensive range of statistical procedures
  • Easily build charts with sophisticated reporting capabilities
  • Discover new insights in your data with tables, graphs, cubes and pivoting technology
  • Quickly build dialog boxes or let advanced users create customized dialog boxes that make your organization's analyses easier and more efficient

Descriptive Statistics

  • Crosstabulations - Counts, percentages, residuals, marginals, tests of independence, test of linear association, measure of linear association, ordinal data measures, nominal by interval measures, measure of agreement, relative risk estimates for case control and cohort studies. (A worked sketch follows this list.)
  • Frequencies - Counts, percentages, valid and cumulative percentages; central tendency, dispersion, distribution and percentile values.
  • Descriptives - Central tendency, dispersion, distribution and Z scores.
  • Descriptive ratio statistics - Coefficient of dispersion, coefficient of variation, price-related differential and average absolute deviation.
  • Compare means - Choose whether to use harmonic or geometric means; test linearity; compare via independent sample statistics, paired sample statistics or one-sample t test.
  • ANOVA and ANCOVA - Conduct contrast, range and post hoc tests; analyze fixed-effects and random-effects measures; group descriptive statistics; choose your model based on four types of the sum-of-squares procedure; perform lack-of-fit tests; choose balanced or unbalanced design; and analyze covariance with up to 10 methods.
  • Correlation - Test for bivariate or partial correlation, or for distances indicating similarity or dissimilarity between measures.
  • Nonparametric tests - Chi-square, Binomial, Runs, one-sample, two independent samples, k-independent samples, two related samples, k-related samples.
  • Explore - Confidence intervals for means; M-estimators; identification of outliers; plotting of findings.
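
These procedures run from the SPSS menus or syntax. Purely as an illustration of what a crosstabulation with a test of independence computes, here is a minimal Python sketch using pandas and SciPy; the column names and data are invented for the example, and this is not SPSS's own implementation.

```python
# Minimal sketch: a crosstabulation with a chi-square test of independence,
# in the spirit of the Crosstabs procedure above. Data are made up.
import pandas as pd
from scipy.stats import chi2_contingency

df = pd.DataFrame({
    "gender":   ["F", "M", "F", "M", "F", "M", "F", "M"],
    "response": ["yes", "no", "yes", "yes", "no", "no", "yes", "no"],
})

# Counts and row percentages (the core Crosstabs output)
counts = pd.crosstab(df["gender"], df["response"])
row_pct = pd.crosstab(df["gender"], df["response"], normalize="index")

# Chi-square test of independence on the observed counts
chi2, p, dof, expected = chi2_contingency(counts)
print(counts, row_pct, sep="\n\n")
print(f"chi2={chi2:.3f}, p={p:.3f}, dof={dof}")
```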

Tests to Predict Numerical Outcomes and Identify Groups:

IBM SPSS Statistics Base contains procedures for the projects you are working on now and any new ones to come. You can be confident that you'll always have the analytic tools you need to get the job done quickly and effectively.

  • Factor Analysis - Used to identify the underlying variables, or factors, that explain the pattern of correlations within a set of observed variables. In IBM SPSS Statistics Base, the factor analysis procedure provides a high degree of flexibility, offering:
    • Seven methods of factor extraction
    • Five methods of rotation, including direct oblimin and promax for nonorthogonal rotations
    • Three methods of computing factor scores. Also, scores can be saved as variables for further analysis
  • K-means Cluster Analysis - Used to identify relatively homogeneous groups of cases based on selected characteristics, using an algorithm that can handle large numbers of cases but which requires you to specify the number of clusters. Select one of two methods for classifying cases, either updating cluster centers iteratively or classifying only.
  • Hierarchical Cluster Analysis - Used to identify relatively homogeneous groups of cases (or variables) based on selected characteristics, using an algorithm that starts with each case in a separate cluster and combines clusters until only one is left. Analyze raw variables or choose from a variety of standardizing transformations. Distance or similarity measures are generated by the Proximities procedure. Statistics are displayed at each stage to help you select the best solution.
  • TwoStep Cluster Analysis - Group observations into clusters based on a nearness criterion, with either categorical or continuous-level data; specify the number of clusters or let the number be chosen automatically.
  • Discriminant - Offers a choice of variable selection methods, statistics at each step and in a final summary; output is displayed at each step and/or in final form.
  • Linear Regression - Choose from six methods: backwards elimination, forced entry, forced removal, forward entry, forward stepwise selection and R2 change/test of significance; produces numerous descriptive and equation statistics.
  • Ordinal regression—PLUM - Choose from seven options to control the iterative algorithm used for estimation, to specify numerical tolerance for checking singularity, and to customize output; five link functions can be used to specify the model.
  • Nearest Neighbor analysis - Use for prediction (with a specified outcome) or for classification (with no outcome specified); specify the distance metric used to measure the similarity of cases; and control whether missing values or categorical variables are treated as valid values.
  • Procedures Included:

    General linear models (GLM) – Provides you with more flexibility to describe the relationship between a dependent variable and a set of independent variables. The GLM gives you flexible design and contrast options to estimate means and variances and to test and predict means. You can also mix and match categorical and continuous predictors to build models. Because GLM doesn't limit you to one data type, you have options that provide you with a wealth of model-building possibilities.
     
    • Linear mixed models, also known as hierarchical linear models (HLM)
      • Fixed effect analysis of variance (ANOVA), analysis of covariance (ANCOVA), multivariate analysis of variance (MANOVA) and multivariate analysis of covariance (MANCOVA)
      • Random or mixed ANOVA and ANCOVA
      • Repeated measures ANOVA and MANOVA
      • Variance component estimation (VARCOMP)
      The linear mixed models procedure expands the general linear models used in the GLM procedure so that you can analyze data that exhibit correlation and non-constant variability. If you work with data that display correlation and non-constant variability, such as data that represent students nested within classrooms or consumers nested within families, use the linear mixed models procedure to model means, variances and covariances in your data.

      Its flexibility means you can formulate dozens of models, including split-plot design, multi-level models with fixed-effects covariance, and randomized complete blocks design. You can also select from 11 non-spatial covariance types, including first-order ante-dependence, heterogeneous, and first-order autoregressive. You'll reach more accurate predictive models because it takes the hierarchical structure of your data into account.

      You can also use linear mixed models if you're working with repeated measures data, including situations in which there are different numbers of repeated measurements, different intervals for different cases, or both. Unlike standard methods, linear mixed models use all your data and give you a more accurate analysis. (A minimal sketch follows this list.)
    • Generalized linear models (GENLIN): GENLIN covers not only widely used statistical models, such as linear regression for normally distributed responses, logistic models for binary data, and loglinear models for count data, but also many useful statistical models via its very general model formulation. The independence assumption, however, prohibits generalized linear models from being applied to correlated data.
    • Generalized estimating equations (GEE): GEE extend generalized linear models to accommodate correlated longitudinal data and clustered data.
    • General models of multiway contingency tables (LOGLINEAR)
    • Hierarchical loglinear models for multiway contingency tables (HILOGLINEAR)
    • Loglinear and logit models for count data by means of a generalized linear models approach (GENLOG)
    • Survival analysis procedures:
      • Cox regression with time-dependent covariates
      • Kaplan-Meier
      • Life Tables
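
As an illustration of the linear mixed models idea described above (students nested within classrooms), here is a minimal sketch of a random-intercept model using Python's statsmodels package. The data, variable names, and model are hypothetical stand-ins, not SPSS's own procedure.

```python
# Minimal sketch: a random-intercept linear mixed model for nested data
# (students within classrooms), fit with statsmodels. Data are synthetic.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_classes, n_students = 10, 20
classroom = np.repeat(np.arange(n_classes), n_students)
hours = rng.uniform(0, 10, size=n_classes * n_students)
class_effect = rng.normal(0, 2, size=n_classes)[classroom]  # shared within class
score = 50 + 3 * hours + class_effect + rng.normal(0, 5, size=hours.size)

df = pd.DataFrame({"score": score, "hours": hours, "classroom": classroom})

# Fixed effect for study hours, random intercept for each classroom
model = smf.mixedlm("score ~ hours", data=df, groups=df["classroom"])
result = model.fit()
print(result.summary())
```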
IBM SPSS Regression Overview, Features and Benefits

More Statistics for Data Analysis

  • Expand the capabilities of IBM® SPSS® Statistics Base for the data analysis stage in the analytical process. Using IBM SPSS Regression with IBM SPSS Statistics Base gives you an even wider range of statistics so you can get the most accurate response for specific data types.

    IBM SPSS Regression includes:

    • Multinomial logistic regression (MLR): Regress a categorical dependent variable with more than two categories on a set of independent variables. This procedure helps you accurately predict group membership within key groups.
      You can also use stepwise functionality, including forward entry, backward elimination, forward stepwise or backward stepwise, to find the best predictor from dozens of possible predictors. If you have a large number of predictors, Score and Wald methods can help you more quickly reach results. You can assess your model fit using the Akaike information criterion (AIC) and the Bayesian information criterion (BIC; also called the Schwarz Bayesian criterion, or SBC).
    • Binary logistic regression: Group people with respect to their predicted action. Use this procedure if you need to build models in which the dependent variable is dichotomous (for example, buy versus not buy, pay versus default, graduate versus not graduate). You can also use binary logistic regression to predict the probability of events such as solicitation responses or program participation.
      With binary logistic regression, you can select variables using six types of stepwise methods, including forward (the procedure selects the strongest variables until there are no more significant predictors in the dataset) and backward (at each step, the procedure removes the least significant predictor in the dataset) methods. You can also set inclusion or exclusion criteria. The procedure produces a report telling you the action it took at each step to determine your variables. (A brief sketch follows this list.)
    • Nonlinear regression (NLR) and constrained nonlinear regression (CNLR): Estimate nonlinear equations. If you are working with models that have nonlinear relationships, for example, if you are predicting coupon redemption as a function of time and number of coupons distributed, estimate nonlinear equations using one of two IBM SPSS Statistics procedures: nonlinear regression (NLR) for unconstrained problems and constrained nonlinear regression (CNLR) for both constrained and unconstrained problems.
      NLR enables you to estimate models with arbitrary relationships between independent and dependent variables using iterative estimation algorithms, while CNLR enables you to:
      • Use linear and nonlinear constraints on any combination of parameters
      • Estimate parameters by minimizing any smooth loss function (objective function)
      • Compute bootstrap estimates of parameter standard errors and correlations
    • Weighted least squares (WLS): If the spread of residuals is not constant, the estimated standard errors will not be valid. Use weighted least squares to estimate the model instead (for example, when predicting stock values, stocks with higher share values fluctuate more than low-value shares).
    • Two-stage least squares (2SLS): Use this technique to estimate your dependent variable when the independent variables are correlated with the regression error terms.
      For example, a book club may want to model the amount they cross-sell to members using the amount that members spend on books as a predictor. However, money spent on other items is money not spent on books, so an increase in cross-sales corresponds to a decrease in book sales. Two-Stage Least-Squares Regression corrects for this error.
    • Probit analysis: Probit analysis is most appropriate when you want to estimate the effects of one or more independent variables on a categorical dependent variable.
      For example, you would use probit analysis to establish the relationship between the percentage discount on a product and whether a customer will buy as the price decreases. Then, for every percent taken off the price, you can estimate the probability that a consumer will buy the product.
    • IBM SPSS Regression includes additional diagnostics for use when developing a classification table
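
To make the binary logistic case concrete, here is a minimal Python sketch using statsmodels. The buy/not-buy data and predictors are fabricated for illustration, and the stepwise selection and diagnostics described above are not shown.

```python
# Minimal sketch: binary logistic regression for a buy / not-buy outcome,
# analogous to the procedure described above. Data are synthetic.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
income = rng.normal(50, 15, size=200)              # hypothetical predictor
visits = rng.poisson(3, size=200)                  # hypothetical predictor
xb = -4 + 0.05 * income + 0.4 * visits
buy = (rng.random(200) < 1 / (1 + np.exp(-xb))).astype(int)

X = sm.add_constant(np.column_stack([income, visits]))
result = sm.Logit(buy, X).fit(disp=False)
print(result.summary())
print("Predicted purchase probability, first 5 cases:",
      result.predict(X)[:5].round(3))
```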

IBM SPSS Custom Tables

    IBM® SPSS® Custom Tables helps you easily understand your data and quickly summarize your results in different styles for different audiences.

    More than a simple reporting tool, IBM SPSS Custom Tables combines comprehensive analytical capabilities with interactive table-building features to help you learn from your data and communicate the results of your analyses as professional-looking tables that are easy to read and interpret.

    • Compare means or proportions for demographic groups, customer segments, time periods or other categorical variables when you include inferential statistics
    • Select summary statistics - from simple counts for categorical variables to measures of dispersion - and sort categories by any summary statistic used
    • Choose from three significance tests: Chi-square test of independence, comparison of column means (t test), or comparison of column proportions (z test)
    • Drag and drop variables onto the interactive table builder to create results as pivot tables
    • Preview tables in real time and modify them as you create them
    • Exclude specific categories, display missing value cells and add subtotals to your tables
    • Export tables to Microsoft® Word, Excel®, PowerPoint® or HTML for use in reports

    IBM SPSS Custom Tables is an analytical tool that helps you augment your reports with information your readers need to make more informed decisions.

    Use inferential statistics—also known as significance testing—in your tables to perform common analyses: Compare means or proportions for demographic groups, customer segments, time periods, or other categorical variables; and identify trends, changes, or major differences in your data. IBM SPSS Custom Tables includes the following significance tests:

    • Chi-square test of independence
    • Comparison of column means (t test)
    • Comparison of column proportions (z test)
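
As a rough illustration of what the comparison of column proportions computes, here is a two-proportion z test sketched in Python with statsmodels; the counts are hypothetical.

```python
# Minimal sketch: two-proportion z test, analogous to the "comparison of
# column proportions" test listed above. Counts are hypothetical.
from statsmodels.stats.proportion import proportions_ztest

# e.g., 45 of 200 customers in segment A responded vs. 30 of 180 in segment B
stat, pvalue = proportions_ztest(count=[45, 30], nobs=[200, 180])
print(f"z = {stat:.3f}, p = {pvalue:.3f}")
```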

    You can also choose from a variety of summary statistics, which include everything from simple counts for categorical variables to measures of dispersion. Summary statistics are included for:

    • Categorical variables
    • Multiple response sets
    • Scale variables
    • Custom total summaries for categorical variables

    When your analysis is complete, you can use IBM SPSS Custom Tables to create customized tabular reports suitable for a variety of audiences—including those without a technical background.

     

IBM SPSS Data Preparation Overview, Features, and Benefits

    IBM® SPSS® Data Preparation gives analysts advanced techniques to streamline the data preparation stage of the analytical process. All researchers have to prepare their data before analysis. While basic data preparation tools are included in IBM SPSS Statistics Base, IBM SPSS Data Preparation provides specialized techniques to prepare your data for more accurate analyses and results.

    With IBM SPSS Data Preparation, you can:

    • Quickly identify suspicious or invalid cases, variables and data values
    • View patterns of missing data
    • Summarize variable distributions
    • Optimally bin nominal data
    • More accurately prepare your data for analysis
    • Use Automated Data Preparation (ADP) to detect and correct quality errors and impute missing values in one efficient step
    • Get recommendations and visualizations to help you determine which data to use

Expand your Data Preparation Techniques with IBM SPSS Data Preparation

  • Use the specialized data preparation techniques in IBM SPSS Data Preparation to facilitate data preparation in the analytical process. IBM SPSS Data Preparation easily plugs into IBM SPSS Statistics Base so you can seamlessly work in the IBM SPSS environment.

Perform Data Checks

  • Data validation has typically been a manual process. You might run a frequency on your data, print the frequencies, circle what needs to be fixed and check for case IDs. This approach is time consuming and prone to errors. And since every analyst in your organization could use a slightly different method, maintaining consistency from project to project may be a challenge.

    To eliminate manual checks, use the IBM SPSS Data Preparation Validate Data procedure. This enables you to apply rules to perform data checks based on each variable's measure level (whether categorical or continuous).

    For example, if you're analyzing data that has variables on a five-point Likert scale, use the Validate Data procedure to apply a rule for five-point scales and flag all cases that have values outside of the 1-5 range. You can receive reports of invalid cases as well as summaries of rule violations and the number of cases affected. You can specify validation rules for individual variables (such as range checks) and cross-variable checks (for example, "retired 30-year-olds").

    With this knowledge you can determine data validity and remove or correct suspicious cases at your discretion before analysis.
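
Validate Data is an SPSS procedure; as a generic stand-in showing the kind of rules it applies, here is a short pandas sketch that flags out-of-range Likert values and one cross-variable inconsistency. All column names and data are invented for the example.

```python
# Minimal sketch of rule-based data checks, in the spirit of Validate Data.
# Flags Likert responses outside 1-5 and one cross-variable inconsistency.
import pandas as pd

df = pd.DataFrame({
    "case_id":      [1, 2, 3, 4],
    "satisfaction": [4, 7, 2, 0],   # five-point Likert item
    "age":          [30, 45, 30, 62],
    "employment":   ["retired", "employed", "retired", "retired"],
})

# Single-variable rule: a five-point scale must lie in 1..5
bad_likert = df[~df["satisfaction"].between(1, 5)]

# Cross-variable rule: flag implausible "retired 30-year-olds"
bad_cross = df[(df["employment"] == "retired") & (df["age"] < 40)]

print("Out-of-range Likert cases:\n", bad_likert[["case_id", "satisfaction"]])
print("Cross-variable violations:\n", bad_cross[["case_id", "age", "employment"]])
```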

Quickly Find Multivariate Outliers

  • Prevent outliers from skewing analyses when you use the IBM SPSS Data Preparation Anomaly Detection procedure. This searches for unusual cases based upon deviations from similar cases, and gives reasons for such deviations. You can flag outliers by creating a new variable. Once you have identified unusual cases, you can further examine them and determine if they should be included in your analyses.
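
SPSS's Anomaly Detection procedure uses its own peer-group algorithm; as a generic illustration of flagging multivariate outliers, here is a Mahalanobis-distance sketch in Python. The data and cutoff are illustrative, not SPSS's method.

```python
# Minimal sketch: flag multivariate outliers by Mahalanobis distance.
# A generic technique, not SPSS's Anomaly Detection algorithm.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(200, 3))
X[0] = [8, -7, 9]                                   # plant one obvious outlier

mean = X.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(X, rowvar=False))
diff = X - mean
d2 = np.einsum("ij,jk,ik->i", diff, cov_inv, diff)  # squared distances

cutoff = chi2.ppf(0.999, df=X.shape[1])             # chi-square cutoff, 3 vars
print("Flagged cases:", np.where(d2 > cutoff)[0])
```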

Pre-process Data before Model Building

  • In order to use algorithms that are designed for nominal attributes (such as Naïve Bayes and logit models), you must bin your scale variables before model building. If scale variables aren't binned, algorithms such as multinomial logistic regression will take an extremely long time to process or they might not converge. This is especially true if you have a large dataset. In addition, the results you receive may be difficult to read or interpret.

    IBM SPSS Data Preparation Optimal Binning, however, enables you to determine cutpoints to help you reach the best possible outcome for algorithms designed for nominal attributes.

    With this procedure, you can select from three types of binning for preprocessing data:

    • Unsupervised -- create bins with equal counts
    • Supervised -- take the target variable into account to determine cutpoints. This method is more accurate than unsupervised; however, it is also more computationally intensive.
    • Hybrid approach -- combines the unsupervised and supervised approaches. This method is particularly useful if you have a large number of distinct values.
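
As a rough illustration of the unsupervised and supervised ideas above, here is a Python sketch using equal-count quantile bins and a shallow decision tree as a stand-in for supervised cutpoint selection. The data are synthetic and the tree is a generic substitute, not SPSS's Optimal Binning algorithm.

```python
# Minimal sketch: unsupervised (equal-count) vs. supervised binning.
# The decision tree is a generic stand-in for supervised cutpoint selection.
import numpy as np
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(3)
income = rng.normal(50, 15, size=500)               # scale variable to bin
target = income + rng.normal(0, 10, size=500) > 55  # binary outcome

# Unsupervised: four bins with (roughly) equal counts
equal_count_bins = pd.qcut(income, q=4)

# Supervised: let a shallow tree choose cutpoints that separate the target
tree = DecisionTreeClassifier(max_leaf_nodes=4)
tree.fit(income.reshape(-1, 1), target)
cutpoints = sorted(t for t in tree.tree_.threshold if t != -2)  # -2 marks leaves

print("Equal-count bin edges:", equal_count_bins.categories)
print("Supervised cutpoints:", [round(c, 2) for c in cutpoints])
```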

IBM SPSS Missing Values

IBM® SPSS® Missing Values is used by survey researchers, social scientists, data miners, market researchers and others to validate data.

Missing data can seriously affect your models – and your results. Ignoring missing data, or assuming that excluding missing data is sufficient, risks reaching invalid and misleading results. To ensure that you take missing values into account, make IBM SPSS Missing Values part of your data management and preparation.

Uncover Missing Data Patterns

    • Easily examine data from several different angles using one of six diagnostic reports, then estimate summary statistics and impute missing values
    • Quickly diagnose serious missing data imputation problems
    • Replace missing values with estimates
    • Display a snapshot of each type of missing value and any extreme values for each case
    • Remove hidden bias by replacing missing values with estimates to include all groups – even those with poor responsiveness

Uncover Missing Data Patterns

  • With IBM SPSS Missing Values, you can easily examine data from several different angles using one of six diagnostic reports to uncover missing data patterns. You can then estimate summary statistics and impute missing values through regression or expectation maximization algorithms (EM algorithms).

    IBM SPSS Missing Values helps you to:

    • Diagnose if you have a serious missing data imputation problem
    • Replace missing values with estimates -- for example, impute your missing data with the regression or EM algorithms
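
SPSS Missing Values imputes through regression or EM algorithms; as a generic analogue, here is a scikit-learn sketch using IterativeImputer, which fills each missing value from a regression on the other variables. The data are synthetic and this is not SPSS's implementation.

```python
# Minimal sketch: regression-based imputation of missing values, a generic
# analogue of the regression/EM imputation described above. Data are synthetic.
import numpy as np
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import IterativeImputer

rng = np.random.default_rng(4)
X = rng.normal(0, 1, size=(100, 3))
X[:, 2] = 2 * X[:, 0] - X[:, 1] + rng.normal(0, 0.1, size=100)  # related cols
X[rng.random(X.shape) < 0.1] = np.nan                           # ~10% missing

imputer = IterativeImputer(random_state=0)
X_filled = imputer.fit_transform(X)
print("Missing before:", int(np.isnan(X).sum()),
      "| after:", int(np.isnan(X_filled).sum()))
```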