Abstract
Serious information security vulnerabilities are
discovered and reported daily in already deployed
software products. Customers have no feasible means for
estimating the security level of the products they
purchase. The few generally applicable methods require
the source code, which is often not delivered with a
product. Many of the reported vulnerabilities are
robustness problems. Robustness can be functionally
assessed without the source code by injecting anomalies,
i.e. unexpected input elements, into the tested component. The
component passes the tests if it can securely handle the
injected anomalies.
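To make the idea concrete, the sketch below shows anomaly injection in miniature. It is illustrative only: the component name `./server-under-test`, the baseline request, and the anomaly list are assumptions, not material from the thesis.

```python
import subprocess

# Hypothetical component under test: "./server-under-test" reads one
# request from stdin and exits. The binary name and the baseline
# message are illustrative, not taken from the thesis.
VALID_REQUEST = b"GET /index.html HTTP/1.0\r\n\r\n"

# Anomalies: unexpected input elements substituted into an otherwise
# valid message.
ANOMALIES = [
    b"A" * 65536,   # overlong field
    b"%n%n%n%n",    # format-string directives
    b"\x00",        # embedded NUL byte
    b"",            # missing element
]

def run_case(case: bytes) -> str:
    """Feed one test case to the component and classify the outcome."""
    try:
        proc = subprocess.run(["./server-under-test"], input=case,
                              capture_output=True, timeout=5)
    except subprocess.TimeoutExpired:
        return "hang"    # possible denial-of-service symptom
    if proc.returncode < 0:
        return "crash"   # killed by a signal: a robustness failure
    return "pass"        # the anomaly was handled securely

for anomaly in ANOMALIES:
    case = VALID_REQUEST.replace(b"/index.html", anomaly)
    print(run_case(case), repr(anomaly[:20]))
```

A crash or a hang on an injected anomaly is exactly the kind of robustness failure such tests are meant to expose; a clean exit on every case means the component passes.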
The methods generally applied for software testing and
modelling were found to be too complex and rigid for
functional robustness assessment. A new mini-simulation
method using an attribute grammar to model both input syntax
and software behaviour was proposed. Means for the
systematic creation of a large number of test cases were
presented. The method was used to test the robustness of
49 software products. Of these, 41 were found to be
vulnerable to denial-of-service problems, and 14 were
proven to contain vulnerabilities that made it possible to
execute remotely supplied code on the host system.
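A minimal sketch of the generation idea follows, assuming a toy grammar in plain Python rather than the attribute-grammar formalism of the thesis: each field of the modelled input carries a valid value and a set of anomalous alternatives, and each test case substitutes exactly one anomaly into an otherwise valid message.

```python
# Toy grammar for grammar-driven test-case generation. The field
# names, values, and anomalies are invented for illustration.
GRAMMAR = {
    "request": ["method", "sp", "path", "version", "crlf"],
    "method":  {"valid": b"GET",         "anomalies": [b"", b"G" * 4096]},
    "sp":      {"valid": b" ",           "anomalies": [b"", b"\t\t"]},
    "path":    {"valid": b"/index.html", "anomalies": [b"/" + b"A" * 65536, b"%s%s%s"]},
    "version": {"valid": b"HTTP/1.0",    "anomalies": [b"HTTP/9.9", b"\xff\xff"]},
    "crlf":    {"valid": b"\r\n\r\n",    "anomalies": [b"\n", b""]},
}

def test_cases():
    # Single-anomaly strategy: every case is the valid message with
    # exactly one field replaced by one of its anomalies, so a large
    # case set is enumerated from a compact model.
    fields = GRAMMAR["request"]
    for i, field in enumerate(fields):
        for anomaly in GRAMMAR[field]["anomalies"]:
            parts = [GRAMMAR[f]["valid"] for f in fields]
            parts[i] = anomaly
            yield b"".join(parts)

for case in test_cases():
    print(repr(case[:60]))
```

The single-anomaly strategy is only one simple systematic choice, but it illustrates the point: because the cases are enumerated from a compact model, a large test set follows from a small specification.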
Applications of the method include quantitative
comparisons and the benchmarking of software components,
but it has some limitations. The proportion of flaws
found by the method, relative to the actual number of
flaws present, is difficult to assess, and the tests may
favour some components over others. However, if the
method were used to eliminate the most obvious
vulnerabilities, finding serious flaws with unsystematic
methods would become much more difficult. This could cut
down the number of publicly disclosed vulnerabilities.
| Original language | English |
| --- | --- |
| Qualification | Licentiate Degree |
| Place of Publication | Espoo |
| Print ISBNs | 951-38-5873-1 |
| Electronic ISBNs | 951-38-5874-X |
| Publication status | Published - 2001 |
| MoE publication type | G3 Licentiate thesis |
Keywords
- information security
- automated testing
- software quality
- implementation vulnerabilities
- programming mistakes
- mini-simulation method