In many cases the underlying theoretical relationship between the response and its factors is unknown. We can still develop a model of the response surface if we make some reasonable assumptions about the underlying relationship between the factors and the response. For example, if we believe that factors A and B are independent and that each has only a first-order effect on the response, then the following equation is a suitable model.

*R* = β_{0} + β_{a}*A* + β_{b}*B*

where *R* is the response, *A* and *B* are the factor levels, and β_{0}, β_{a}, and β_{b} are adjustable parameters whose values are determined by a linear regression analysis. We call this equation an empirical model of the response surface because it has no basis in a theoretical understanding of the relationship between the response and its factors. Although an empirical model may provide an excellent description of the response surface over a limited range of factor levels, we cannot reliably extend it to unexplored parts of the response surface.
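As a minimal sketch of the regression step, the code below fits the first-order model *R* = β_{0} + β_{a}*A* + β_{b}*B* by least squares. The factor levels and responses are hypothetical values invented for illustration; they are not data from the text.

```python
import numpy as np

# Hypothetical factor levels (coded as -1/+1) and measured responses,
# invented purely to illustrate the fitting step.
A = np.array([-1.0, 1.0, -1.0, 1.0])
B = np.array([-1.0, -1.0, 1.0, 1.0])
R = np.array([2.0, 6.0, 4.0, 8.0])

# Design matrix for R = b0 + ba*A + bb*B
X = np.column_stack([np.ones_like(A), A, B])

# Linear regression by least squares gives the beta estimates
beta, *_ = np.linalg.lstsq(X, R, rcond=None)
b0, ba, bb = beta
print(b0, ba, bb)  # -> 5.0 2.0 1.0 for this hypothetical data
```

With coded -1/+1 levels the columns of the design matrix are orthogonal, so each β is simply a contrast of the responses divided by the number of trials.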

As shown here, we build an empirical model by measuring the response for at least two levels of each factor—indicated by the plus and minus signs in the tables—and completing a simple regression analysis. This is known as a 2^{k} factorial design because it requires 2^{k} experiments, where *k* is the number of factors.

A 2^{k} factorial design can model only a factor’s first-order effect on the response. A 2^{2} factorial design, for example, includes each factor’s first-order effect (β_{a} and β_{b}), a first-order interaction between the factors (β_{ab}), and an intercept (β_{0}); with four experiments we have just enough information to calculate the four β values.

*R* = β_{0} + β_{a}*A* + β_{b}*B* + β_{ab}*AB*
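Because a 2^{2} design gives exactly four equations in the four unknowns β_{0}, β_{a}, β_{b}, and β_{ab}, the model can be solved directly rather than fit by least squares. The responses below are hypothetical, chosen only to make the arithmetic concrete.

```python
import numpy as np

# The four trials of a 2^2 factorial design, coded as -1/+1 levels
A = np.array([-1.0, 1.0, -1.0, 1.0])
B = np.array([-1.0, -1.0, 1.0, 1.0])
# Hypothetical responses (illustrative only)
R = np.array([3.0, 7.0, 5.0, 13.0])

# Four equations in four unknowns: b0, ba, bb, bab
X = np.column_stack([np.ones(4), A, B, A * B])
beta = np.linalg.solve(X, R)
print(beta)  # -> [7. 3. 2. 1.] for this hypothetical data
```

Note that the model passes exactly through all four points; with no extra trials there are no degrees of freedom left to test the model's adequacy.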

A 2^{k} factorial design cannot model higher-order effects because there is insufficient information. Here is a simple example that illustrates the problem. Suppose we need to model a system in which the response is a function of a single factor. As illustrated here, a 2^{1} factorial design has but two responses, which means we can fit only a straight line to the data. To see evidence of curvature we must measure the response for at least three levels of each factor, as shown in (b).
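The single-factor case can be sketched numerically: two points always admit a perfect straight-line fit, so curvature is invisible, while a third (center) level makes the curvature estimable. The responses are hypothetical.

```python
import numpy as np

# With only two levels of a single factor, a straight line always
# fits perfectly -- any curvature is invisible.
x2 = np.array([-1.0, 1.0])
r2 = np.array([2.0, 2.0])
slope, intercept = np.polyfit(x2, r2, 1)  # exact fit, zero residual

# Adding a third (center) level exposes curvature: the center
# response lies well off the line through the two end points.
x3 = np.array([-1.0, 0.0, 1.0])
r3 = np.array([2.0, 5.0, 2.0])  # hypothetical curved response
coeffs = np.polyfit(x3, r3, 2)  # quadratic term is now estimable
print(coeffs)  # -> [-3.  0.  5.]
```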

If we cannot fit a first-order empirical model to our data, we may be able to model it using a full second-order polynomial equation, such as the one shown here for two factors.

*R* = β_{0} + β_{a}*A* + β_{b}*B* + β_{ab}*AB* + β_{aa}*A*^{2} + β_{bb}*B*^{2}

We can accomplish this using the 3^{k} factorial design shown here.
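A quick check that a 3^{2} design supports the full second-order model: nine trials against six β values leaves the fit overdetermined but solvable by least squares. Here the responses are generated from a known surface (assumed coefficients, for illustration only), so we can confirm the regression recovers them.

```python
import numpy as np
from itertools import product

# The nine trials of a 3^2 factorial design: each factor at -1, 0, +1
pts = np.array(list(product([-1.0, 0.0, 1.0], repeat=2)))
A, B = pts[:, 0], pts[:, 1]

# Hypothetical responses from an assumed second-order surface
R = 4 + 2*A + 1*B + 0.5*A*B - 1.5*A**2 - 0.5*B**2

# Design matrix for the full second-order model
X = np.column_stack([np.ones(9), A, B, A*B, A**2, B**2])
beta, *_ = np.linalg.lstsq(X, R, rcond=None)
print(beta)  # -> [ 4.   2.   1.   0.5 -1.5 -0.5]
```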

One limitation to a 3^{k} factorial design is the number of trials we need to run. As illustrated above, a 3^{2} factorial design requires 9 trials. This number increases to 27 for three factors and to 81 for four factors. A more efficient experimental design for systems containing more than two factors is a central composite design, two examples of which are shown here.

The central composite design consists of a 2^{k} factorial design, which provides data for estimating each factor’s first-order effect and interactions between the factors, and a star design consisting of 2*k* + 1 points, which provides data for estimating second-order effects. Although a central composite design for two factors requires the same number of trials, 9, as a 3^{2} factorial design, it requires only 15 trials for three factors and 25 trials for four factors, compared to 27 and 81 trials for the corresponding 3^{k} factorial designs.
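The trial counts above follow directly from the design's construction. The sketch below builds the coded points of a central composite design (the function name and the default axial distance `alpha` are my own choices, not from the text) and counts the trials for two, three, and four factors.

```python
from itertools import product

def central_composite_points(k, alpha=1.0):
    """Coded points of a central composite design for k factors:
    a 2^k factorial, 2k axial (star) points, and one center point,
    giving 2^k + 2k + 1 trials in all."""
    factorial = list(product([-1.0, 1.0], repeat=k))
    star = []
    for i in range(k):
        for s in (-alpha, alpha):
            pt = [0.0] * k
            pt[i] = s
            star.append(tuple(pt))
    center = [(0.0,) * k]
    return factorial + star + center

for k in (2, 3, 4):
    print(k, len(central_composite_points(k)))  # -> 9, 15, 25 trials
```

In practice the axial distance `alpha` is often chosen larger than 1 (for example, to make the design rotatable), but any value illustrates the point count.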