Marriott Hotel, Amsterdam
Monday 10 September to
Wednesday 12 September 1984
Nederlands Instituut
van Registeraccountants
AW/yk/01061
International Symposium on Auditing in an Advanced Complex Computerised
Environment

22nd August 1984
Dear participant,
Herewith we are sending you the documentation for NIVRA's above-mentioned
international symposium. Please find enclosed the following documentation:
1. The papers on the subjects of the symposium:
   a. The EDP Auditor's Role in Development of Distributed Systems
      by James H. David
   b. Risk Analysis in a Computer Environment by Eric Guldentops
   c. Security and audit of operating systems (including manufacturers'
      utilities) by Herman Roos
   d. Distributed databases, control implications and the audit approach
      by Hans Leenaars
   e. Electronic Funds Transfer Systems and Electronic Money: Control and
      Audit Implications by Michel Léger
   f. Use of microcomputers in the audit process by William C. Mair
   g. The Control and Audit Impact of Fourth Generation Languages
      by Nick Pasricha
   h. The Audit Approach: Systems Based or Transaction Directed
      by Rodney C.L. Perry
You are expected to read these papers as they will form the basis for the
discussions and/or workshop sessions.
2. The curricula vitae of the speakers.
3. The case study
   The Distrimax case study has been sent to the speakers as a guideline
   in preparing their papers. The purpose of the case study is to provide,
   in workshops and plenary discussions, a practical environment against
   which ideas and theories developed during the symposium may be tested.
   The computerised aspects of the administrative systems, the hardware
   configurations and the network procedures are of particular
   interest to the symposium. The description of the business of Distrimax
   only serves as background to the described computer environment.
4. A list of participants.
5. A brochure of the Amsterdam Marriott Hotel.
6. The final programme.
Registration
For foreign participants registration will be open on Sunday night, 9th
September, from 19.00 to 20.00 hrs.
Monday morning 10th September registration commences at 8.15 a.m.
Address
The symposium will be held in the Amsterdam Marriott Hotel. The address is:
Stadhouderskade 21, Amsterdam, The Netherlands. The hotel itself has only
modest parking facilities but can easily be reached by public transport:
1. from Schiphol Amsterdam Airport: by taxi, or by train to Amsterdam Zuid
   Station and bus 66 to Leidseplein;
2. from Amsterdam Central Station: by trams 1, 2 and 5 and bus 67 to
   Leidseplein.
For any further information you can contact the Secretary of the Symposium
Committee:
Mr. A.J.M. Werring
NIVRA
P.O. Box 7984
1008 AD Amsterdam
The Netherlands
telex: 18867 NIVRA NL
tel.: 020-440222
We hope that you will enjoy your attendance of the symposium and your visit to
The Netherlands and are looking forward to meeting you in Amsterdam.
Sincerely yours,

A.J.M. Werring
(Secretary Symposium Committee)

Enclosures
General Information
International Symposium on Auditing in an Advanced Complex
Computerised Environment
10-12 September 1984
Marriott Hotel, Amsterdam
Symposium president:  prof. dr. A.B. Frielink

Symposium committee:  F. Dankmeijer RA
                      prof. dr. A.B. Frielink
                      H. Lafèbre
                      C.R.M. Petit
                      A.J.M. Werring (secretary)
                      H. Wiegers RA (chairman)

Speakers:             James H. David
                      Eric Guldentops
                      Michel Léger
                      Hans Leenaars
                      William C. Mair
                      Nick Pasricha
                      Rodney C.L. Perry
                      Herman Roos
                      prof. L.C. van Zutphen

Debaters:             prof. L.A. van Hulsentop
                      drs. J.Chr. van Dijk
                      John Ford
                      Mrs. M.E. van Biene-Hershey
                      MacLain Pont
                      R.J.M. van der Horst
                      drs. F. Hoek
                      drs. J.M. Vermeer
PROGRAMME INTERNATIONAL SYMPOSIUM ON AUDITING IN AN ADVANCED COMPLEX
COMPUTERISED ENVIRONMENT 10-12 September 1984, Marriott Hotel, Amsterdam
Monday 10 September 1984
09.00 - 09.15 hrs.
Opening and brief presentation case study Distrimax (C.R.M. Petit)
09.15 - 10.50 hrs.
Critical aspects of internal control and the role of the (EDP-)auditor in the
development of a large scale complex distributed system (James H. David).
- speaker 30 minutes
- debaters 2 x 10 minutes 20 minutes
- plenary discussion 40 minutes
10.50 - 11.20 hrs.
Coffee break
11.20 - 13.00 hrs.
Impact of computerization on audit risk analysis (Eric Guldentops).
- speaker 30 minutes
- debaters 2 x 15 minutes 30 minutes
- plenary discussion 40 minutes
13.00 - 14.30 hrs.
Aperitif and lunch
14.30 - 17.00 hrs.
Security and audit of operating systems (including manufacturers' utilities)
(Herman Roos).
- speaker 30 minutes
- workshops 80 minutes
- plenary session 40 minutes
19.00 - 20.00 hrs.
Reception by Council of NIVRA
from 20.00 hrs.
Symposium dinner with dinner speech by symposium president prof.dr. A.B.
Frielink
Tuesday 11 September 1984
08.30 - 10.30 hrs.
Distributed data-base systems; control implications and the audit approach
(Hans Leenaars).
- speaker 30 minutes
- workshops 60 minutes
- plenary session 30 minutes
10.30 - 11.00 hrs.
Coffee break
11.00 - 12.30 hrs.
Control and audit implications of Electronic Funds Transfer Systems (Michel
Léger).
- speaker 30 minutes
- debaters 2 x 10 minutes 20 minutes
- plenary discussion 40 minutes
12.30 - 14.00 hrs.
Aperitif and lunch
14.00 - 16.00 hrs.
Demonstrations of hard- and software by major vendors
16.00 - 17.30 hrs.
Use of micro-computers in the audit process (William C. Mair).
- speaker 30 minutes
- debaters 2 x 10 minutes 20 minutes
- plenary discussion 40 minutes
17.30 - 18.00 hrs.
Presentation by IFAC's Subcommittee on Auditing in an EDP-Environment (Luc
van Zutphen).
Wednesday 12 September 1984
09.00 - 10.30 hrs.
The Impact of the use of fourth generation languages on the internal control
and audit (Nick Pasricha).
- speaker 30 minutes
- workshops 50 minutes
- plenary session 20 minutes
10.30 - 11.00 hrs.
Coffee break
11.00 - 13.00 hrs.
Audit approach: system oriented versus transaction oriented in a large scale
complex distributed environment (Rod C.L. Perry).
- speaker 30 minutes
- workshops 90 minutes
13.00 - 14.30 hrs.
Aperitif and lunch.
14.30 - 15.30 hrs.
Audit approach continued
- plenary session 60 minutes
15.30 - 16.00 hrs.
Wrap up, conclusions, closing
International Symposium on Auditing in an Advanced Complex Computerised Environment
10-12 September 1984
Marriott Hotel, Amsterdam
List of participants
Name                            Country          Representing

Anderson, A.                    The Netherlands  Ernst & Whinney Nederland
Apeldoorn, H.                   The Netherlands  AZVU Amsterdam
Apers, drs. J.C.E.M.            The Netherlands  GCEI
Arentshorst, T.                 The Netherlands  PTT
Arnold, R.K.                    United Kingdom   Spicer and Pegler
Arthur, G.                      Canada           CICA
Baker, G.                       Canada           CICA
Biene-Hershey, mrs. M.E. van    The Netherlands  Amro Bank
Brooke, Q.                      United Kingdom   Arthur Young
Brouwer, J.                     The Netherlands  de Tombe/Melse & Co
Brouwer, J.                     The Netherlands  Hollandsche Beton Groep nv
Bruggink, J.C.                  The Netherlands  KLM
Brummelhuis, drs. H.H. ten      The Netherlands  Amro Bank
Burbidge, P.W.J.                The Netherlands  Coopers & Lybrand Nederland
Christie, J.N.                  United Kingdom   Touche Ross & Co
Commissaris, M.A.C.             The Netherlands  Moret & Limperg
Court, J.                       United Kingdom   ICAEW
Dekker, F.                      The Netherlands  Min. v. Landbouw en Visserij
Diekema, J.H.                   The Netherlands  PTT
Dijk, drs. J.Chr. van           The Netherlands  Coopers & Lybrand Nederland
Eek, A.J. van                   The Netherlands  Ned. Accountants Maatschap
Eggleston, J.                   United Kingdom   KMG/Thomson McLintock
Eijkenaar, drs. A.              The Netherlands  Dijker en Doornbos/accountants
Farstad, S.                     Norway           Fellesdata a/s
Ford, J.                        South Africa     Univ. of Witwatersrand
Franck, J.                      Denmark          I/S Revisorgruppen
Goosen, J.                      The Netherlands  Ned. Accountants Maatschap
Grove, Chr.                     United Kingdom   KMG/Thomson McLintock
Ham, H.W.F. van den             The Netherlands  Philips International BV
Harmens, drs. J.P.              The Netherlands  Min. van Defensie
Hampland, B.                    Norway           Fellesdata a/s
Heales, J.                      South Africa     Anglo American Corp. of SA Ltd
Hesp, drs. F.A.                 The Netherlands  Amro Bank
Hoek, drs. F.                   The Netherlands  Van Dien + Co
Hoogenhuijze, drs. ing. J. van  The Netherlands  PTT
Hoogstraten, H.J.               The Netherlands  Van Dien + Co
Hooit, drs. H.                  The Netherlands  Ahold NV
Huijssen, B.J.W. van den        The Netherlands  Shell Int. Petr. Mij. BV
Huizenga, drs. J.E.             The Netherlands  KMG/Klynveld Kraayenhof & Co
Hulsentop, prof. L.A. van       The Netherlands  Ned. Philips Bedrijven BV
Hunt, B.                        South Africa     Anglo American Corp. of SA Ltd
Jacobsen, P.                    Norway           Fellesdata a/s
Jansen, F.P.                    The Netherlands  Verlinden Wezeman Org. Adviseurs
Jones, S.                       Sweden           Hagstrom & Sillén AB
Kaiser, E.M.                    The Netherlands  Acc. kantoor van de VNG
Kampert, drs. H.A.              The Netherlands  Dijker en Doornbos/accountants
Kemnitzer, R.                   West-Germany     Treuhand-Vereinigung AG
Keohane, T.                     United Kingdom   Arthur Young McClelland Moores & Co
Keuker, drs. H.H.               The Netherlands  Acc. kantoor der VNG
Kikkers, R.                     The Netherlands  Moret & Limperg
Kleuver, drs. H.J.              The Netherlands  Moret & Limperg
Kloppenburg, G.P.T.M.           The Netherlands  Van Dien + Co
Knutsen, R.                     Norway           Fellesdata a/s
Koning, drs. W.F. de            The Netherlands  Paardekooper & Hoffman
Koogh, drs. C.G. van der        The Netherlands  Dechesne van den Boom en Co
Kooyman, J.                     The Netherlands  KMG/Klynveld Kraayenhof & Co
Krottje, C.                     The Netherlands  Ned. Accountants Maatschap
Laman, drs. W.J.                The Netherlands  Keuzenkamp & Co
Lange, drs. H. de               The Netherlands  Van Dien + Co
Leeuwen, J.J. van               The Netherlands  Ned. Accountants Maatschap
Lie, S.O.                       Norway           Revisor Centeret
Lisiewicz, mrs. C.              West-Germany     Arthur Young GmbH
List, W.                        United Kingdom   KMG/Thomson McLintock
Lock, P.G.                      The Netherlands  Nationale Nederlanden NV
Lutz, H.                        Switzerland      Allgemeine Treuhand A.G.
Martinez, E.                    Spain            Espa Consultores, S.A.
Merz, R.                        Switzerland      Allgemeine Treuhand A.G.
Methorst, drs. J.               The Netherlands  AKZO N.V.
Mol, A.                         The Netherlands  PTT
Moll, S.                        West-Germany     Deutsche Warentreuhand- und Kontinentale Treuhand AG
Montfort, G.                    United Kingdom   Touche Ross & Co
Mooij, R.                       The Netherlands  Ned. Accountants Maatschap
Moonen, H.B.                    The Netherlands  KMG/Klynveld Kraayenhof & Co
Morgan, A.                      United Kingdom   ICI PLC
Morris, L.                      West-Germany     Arthur Young GmbH
Mukarakate, F.                  Zimbabwe         Air Zimbabwe
Naess, T.                       Norway           Fellesdata a/s
Neisingh, A.W.                  The Netherlands  KMG/Klynveld Kraayenhof & Co
Nienhuis, D.                    The Netherlands  SHV Holdings nv
Nierop, drs. ing. L.B. van      The Netherlands  KLM
Nilsen, J.E.                    Norway           Fellesdata a/s
Oakley, A.                      United Kingdom   Ernst & Whinney
Omtvedt Moen, J.                Norway           Fellesdata a/s
Oorschot, H. van                The Netherlands  Van Dien + Co
Otter, D.C. den                 The Netherlands  PTT
Paans, J.M.                     The Netherlands  de Tombe/Melse & Co
Passchier, P.                   The Netherlands  Nationale Nederlanden NV
Putten, drs. H.M.I. van der     The Netherlands  Camps Obers & Co
Quisquater, L.                  Belgium          Van Damme, Riskê & Co
Redford, N.                     South Africa     Arthur Young & Co
Riordan, J.                     South Africa     Anglo American Corp. of SA Ltd
Roseth, P.Chr.                  Norway           Fellesdata a/s
Rutten, J.W.J.M.                The Netherlands  Acc. kantoor van de VNG
Saers, J.A.                     The Netherlands  Moret & Limperg
Sander, J.G.                    The Netherlands  Ned. Philips Bedrijven BV
Scheffler, H.J.                 West-Germany     Deutsche Treuhand-Gesellschaft AG
Scholman, J.H.                  The Netherlands  Paardekooper & Hoffman
Schuijt, H.                     The Netherlands  Ahold NV
Starre, J. van der              The Netherlands  Peat Marwick Nederland
Starreveld, K.                  The Netherlands  Min. van Landbouw en Visserij
Steeman, prof. D.               The Netherlands  KMG/Klynveld Kraayenhof & Co
Steffen, A.                     The Netherlands  Van Dien + Co
Stibbe, drs. E.                 The Netherlands  PTT
Swinson, Chr.                   United Kingdom   Binder Hamlyn
Timmerman, G.                   Belgium          Berger Block Kirschen Schellekens & Co
Toet, mrs. I.                   The Netherlands  Peat Marwick Nederland
Trauschke, drs. J.H.            The Netherlands  PTT
Urbanus, J.H.                   The Netherlands  KMG/Klynveld Kraayenhof & Co
Veen, mrs. drs. I. van der      The Netherlands  PTT
Veenman, P.                     The Netherlands  PTT
Veenstra, drs. R.H.             The Netherlands  Ned. Accountants Maatschap
Vermeer, drs. J.M.              The Netherlands  Dijker en Doornbos/accountants
Verseveld, N. van               The Netherlands  Moret & Limperg
Volck, G.R.                     West-Germany     IDW
Voormeulen, R.                  The Netherlands  Min. van Financiën
Whyte, J.                       Belgium          Ernst & Whinney
Wijck, J.F. van                 The Netherlands  Verenigde Spaarbank
Zanten, drs. J.H. van           The Netherlands  Moret & Limperg
Zwart, H. de                    The Netherlands  Min. van Financiën
case study DISTRIMAX
CASE STUDY N.V. DISTRIMAX
NIVRA - Nederlands Instituut van Registeraccountants
1. GENERAL DESCRIPTION OF DISTRIMAX
1.1. The company and its objectives
1.2. Relationship between Head Office, Branches and DIY's
1.3. Some additional facts
1.3.1. Numbers
1.3.2. Other
1.4. Organisation
1.5. Automation and computerisation
2. ADMINISTRATIVE PROCEDURES AND COMPUTER TECHNOLOGY
2.1. Environment
2.1.1. Configuration
2.1.2. General
2.1.3. EDP Department structure
2.1.4. EDP Department procedures
2.1.5. Communications network
2.1.6. Restart, Recovery and Back-up facilities
2.2. Centralised vs. Decentralised Processing
2.2.1. Image processing
2.2.2. Buying
2.2.3. Goods receiving
2.2.4. Sales
2.2.5. Stocks
2.2.6. Debtors
2.2.7. Creditors
2.2.8. Financial administration
1. GENERAL DESCRIPTION OF DISTRIMAX

1.1. The Company and its objectives
Distrimax N.V. is a wholesaler and retailer in household goods and
machinery. The company, whose head office is situated in the Randstad,
Netherlands, has experienced considerable growth since its foundation
in 1949. Turnover is currently approximately 300 million guilders.
The overall objective of the company is to gradually increase
market share while maintaining reasonable profitability.
The strong growth experienced is largely due to aggressive management
who feel that declining profit margins and increasing competition
necessitate continuous adaption of management policies. Two basic
principles guide management:
- good service to customers
- product differentiation.
Management has made the following observations with respect to
these two principles:
Good service to customers
The distance between Distrimax and its customers must be reduced as
much as possible so that service consists not only of good quality
products, but also optimises direct contact with customers (companies
and individuals) and speed of delivery. To this end, the company began
several years ago to geographically decentralise its organisation.
Today, there are twelve stockholding branches over the whole
country. Attached to each branch are a number of "Do-it-yourself"
shops (DIY's). The total number of these shops is 50. Distrimax
strives to maintain the greatest possible independence of each
branch.
Product differentiation
Distrimax has learnt in the past that expansion of market share
with only one product line does not provide sufficient growth possibilities
and renders the company vulnerable to competition.
Product differentiation is thus an integral part of company strategy.
Factors behind this strategy are as follows:
- the fact that producers which the company has been dealing with for
many years are able to offer a continuously broadening range of
products;
- basic changes in the wholesale and retail markets whereby the accent
lies increasingly on comprehensive service.
The product range of Distrimax falls into four categories:
- household goods such as washing machines, refrigerators, ovens, fire
  places, geysers and mixers;
- machinery such as drills, milling machines, electric motors and
  compressors;
- fittings such as sanitary goods, locks, bolts and hinges;
- microcomputers.
1.2. Relationship between Head Office, Branches and DIY's
Characteristics:
- Buying function: partly centralised, partly decentralised.
- Sales: fully decentralised.
Head Office has a limited distribution function and therefore
maintains a central store primarily for goods with low stock
turnover. As far as possible, suppliers deliver directly to branches
and DIY's.
- Administrative data processing takes place to a great extent at the
point of sale.
- Head Office lays down a budgeting system as well as a system of
periodic reporting.
- Uniform administrative procedures
- Branches are primarily responsible for their own stock and debtors
positions and for the DIY's attached to them. Head Office provides
guidelines for stock levels and credit limits but is particularly
concerned with branch turnover and profitability.
- There is a high level of automation. The company has a mainframe
computer at Head Office with smaller computers and terminals at
branches and DIY's.
1.3. Some additional facts
1.3.1. Numbers
. Products : some 30.000 product lines, mostly spares
and fittings of which about 90% are high
turnover items.
. Suppliers : 2.000
. Customers : 10.000 companies and a large number of DIY
customers.
. Purchase invoices: 24.000 per year
. Orders : 500.000 per year through the branches
. Sales invoices : 550.000 per year with an average of 10 lines
on each invoice.
. Turnover : fl. 300 million per year
. Gross margin : on average 35%
. Stockholding : on average fl. 50 million at branches and DIY's
and fl. 10 million at the central store (at
selling price).
. Personnel : ± 1.000
1.3.2. Other
- Sales generally take place at catalogue prices (laid down by Head
Office) with fixed discounts allowed per article. Branches may
depart from these prices allowing a higher discount if necessary. In
practice the higher discount is granted on about 20% of sales. DIY's
are not allowed to grant any discount except with special permission
from the branch office.
- Buying takes place centrally (stock held at the central store) or at
the branch level. DIY's have no purchase authority. In this way
Distrimax seeks to make optimal use of bulk discounts while minimising
lead times.
1.4. Organisation
Head Office
An organisation structure of the company is given in diagram 1. In
this diagram it is evident that under the directors there are four
functional groups:
- Commercial management. In this group fall the Head Office departments
of buying, sales (general sales policy), stores, distribution,
marketing and the showroom. Also under this group are the
decentralised selling operations of the branches and DIY's.
The task of the sales policy department includes:
- general sales policy
- support of buying policy decisions
- monitoring - through branch managers - the sales policies of
branches and associated DIY's.
- determination of "fixed" selling prices and discounts
- Financial Manager. Under the financial manager reside the following
departments:
- organisation
- business economics
- internal audit
- financial administration
- EDP management.
- General Support manager, providing general office services.
DIAGRAM 1 - ORGANISATION STRUCTURE

(Organisation chart. Under the directors:
- Commercial Management: the Head Office departments of Buying, Sales
  Policy, Stores, Distribution, Marketing and Showroom; the Branch
  Managers, each with Buying, Sales, Stores, Packing + Delivery, Showroom
  and Administration + Credit Control; and the DIY's.
- Financial Manager: Organisation, Business Economics, Internal Audit and
  Financial Administration.
- EDP Manager: Organisation + Control, System Programming, DBA, Network +
  Operations and System Development.
- General Support: Secretarial, Personnel, Household Services and
  Archives.)
A project group has been formed to oversee the introduction of point-of-
sale and office automation systems. This group consists of members
of operating departments (users), branches, the organisation
division, and the system development group (systems analysts and
programmers).
The head of the internal audit department is not a chartered accountant
and the activities of the department are limited. They include:
- stock taking at the central store, branches, and DIY's;
- confirmation of debtors balances;
- monitoring correct treatment of computerised reports;
- checking price and discount calculations.
The Organisation and Control group within the EDP department ensures
that necessary control features are built into computerised systems.
1.5. Automation and Computerisation
Management has sought to take advantage of the depressed economic
climate of the last four years, using this time to consolidate and
develop new systems so that the company is ready to "take-off" with
the shortly anticipated upturn.
These new systems have enabled:
- further independence of the branches, and
- a reduction in the number of personnel employed.
Full details of the new systems and the organisation structure developed
around them may be found in part 2.
2. ADMINISTRATIVE PROCEDURES AND TECHNOLOGY
2.1. ENVIRONMENT
2.1.1. Configuration
The equipment configuration at Head Office, branches and DIY's is
shown in diagrams 3 to 5. The network linking them all together is
shown in diagram 2. Each branch has its own minicomputer and each DIY
its own micro. Branches are primarily responsible for the group's
processing.
2.1.2. General
Incoming documents are read electronically (image processing).
Originals are chronologically archived, with the electronic copies
available for use by employees.
All liabilities are settled through the Dutch national automatic
payment network in which all banks participate.
The central systems development function plays an important role in
the organisation and control of the data communications network, the
back-up and recovery facilities and the development of applications
for use by the branches.
Users of the system have a high-level programming (query) language
available to them. This language is used for ad hoc enquiries and
simple stand-alone applications.
DIAGRAM 2 - COMMUNICATIONS NETWORK DISTRIMAX

(Network diagram: Head Office linked to branch 1 and branches 2-12 and to
the national payment network; dial-up and Viditel connections link the
branches with suppliers and customers.)
DIAGRAM 3 - HEAD OFFICE COMPUTER CONFIGURATION

(Configuration diagram: a VS200 with 8096 K memory; disk units of 9200 MB;
tape units; diskettes; document readers/printers; barcode readers/printers;
daisy/matrix printers; modems with leased lines to the branches and the
national payment network; and a local network of workstations for word
processing, voice recognition, data capture and image processing.)

The Local Network consists of, on average, one workstation for
every two employees, with the necessary peripheral devices.
DIAGRAM 4 - BRANCH COMPUTER CONFIGURATION

(Configuration diagram: a VS100 with 2048 K memory; disk units of 2800 MB;
diskettes; document readers/printers; barcode readers/printers; daisy/matrix
printers; modems with dial-up lines to other branches and a connection to
the national payment network; and a local network of workstations for word
processing, voice recognition, data capture and image processing.)
DIAGRAM 5 - DIY COMPUTER CONFIGURATION

(Configuration diagram: a point-of-sale terminal; a matrix printer; disk
units of 100 MB; and a local network with a workstation for word processing,
voice recognition and data capture.)
2.1.3. EDP Department structure
The structure of this Head Office department is shown in diagram 1.
The head of the department reports directly to the board of
directors. The number of people in this department is broken down as
follows:
Head of department 1
Organisation and Control 3
Systems programming 3
DBA (including data management) 3
Network and operations 3
System development and programming 19
32
In addition, each branch has a branch-appointed system administrator
and assistant system administrator. Under normal circumstances, their
work consists of initiating start-up and day-end procedures. Day-end
procedures consist of a number of batch programs which must be run
daily in a predetermined sequence. Detailed operators' manuals are
available to the administrators.
For more complex ODP * applications, the system administrators and
assistant systems administrators are available to assist the users.
They also all have other tasks within the branch - primarily of an
administrative nature.
2.1.4. EDP department procedures
I. System development.
While normal project development procedures are followed, the
emphasis is, to a large extent, on functional design.
This is realised through use of "proto-typing" wherever possible
and the ODP package is used for this purpose. Final responsibility
for technical aspects of projects is not firmly laid down because
the following departments may all be involved: the DBA group, the
Network group, and the Systems Development group.
Final responsibility for taking a project into production
(acceptance) rests with the users.
* ODP is a specific user friendly programming language of the query
type.
2. Program control.
At Head Office, production and test programs are adequately separated.
Hand-over procedures to the production library are laid
down in detail. The various EVA-source versions of programs are
kept under CDD/DS, the data dictionary/directory system. After
compilation and link-editing the object versions only are sent
via the data communications network to the branches.
These object modules are copied into a password protected dataset -
the production program library. No EVA compilers are available on
the branch or DIY machines. Programs in the production library can
be run by users submitting the correct password. These passwords
allow access to the system (log-on) and to a limited applications
menu (see 5 below - security profiles).
3. Data base
The data bases at each site are managed by a relational data base
management system. The data bases are linked via the company's
communications network. A distribution function of the system
caters for the co-ordination of multi-site transactions. The system
is therefore known as the RDDBMS (relational distributed data base
management system).
Data redundancy occurs for several reasons:
1. The relational nature of the data base causes propagation of
keys;
2. Some data base files exist at multiple sites e.g. the product
information file;
3. Some data base files are horizontally distributed i.e. some
elements concerning the same occurrence of data are maintained
at different sites, e.g. a debtor's total balance is also stored
at Head Office. (The key in this case is always the debtor's
number);
4. Some data base files are vertically distributed, i.e. parts of
the total file are stored at different sites. (Most debtors and
creditors do business with only one branch. Where this is not
the case, performance and availability requirements may cause
multiple site storage).
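
The forms of redundancy listed above can be illustrated with a small
sketch (present-day Python, with invented data; the case study prescribes
no implementation of any kind):

    # "Horizontal" distribution as used here: elements concerning the SAME
    # debtor are maintained at different sites - invoice detail at the
    # branch, the total balance element also at Head Office, keyed on the
    # debtor number.
    branch_detail = {"debtor": 4711, "invoices": [("F550123", 1200.00),
                                                  ("F550124", 300.00)]}
    head_office = {"debtor": 4711, "total_balance": 1500.00}

    # "Vertical" distribution: parts of the total file are stored at
    # different sites, e.g. each branch holds the debtors who deal with it.
    debtor_file = {"Branch 3": [4711], "Branch 7": [4850]}

    # The redundancy creates the co-ordination problem discussed below: the
    # balance element at Head Office must stay equal to the branch detail.
    assert head_office["total_balance"] == \
           sum(amount for _, amount in branch_detail["invoices"])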
For obvious reasons this redundancy causes co-ordination problems.
Meta data elements, including the node(s) where data is maintained,
are therefore all stored at the Head Office central site. The
software for organising, accessing and controlling this meta data
base is known as the CDD/DS (central data dictionary/directory
system).
This software is used by the data base administration, network
control and system development groups. It is not an active system
(due also to the communication problem).
4. ODP
ODP is a query language placing end-users in a position whereby
they are able to do their own programming. It has several built-in
functions designed for ease of use.
5. Security profiles
Security profiles consist of defined combinations of transactions,
terminals and users. On receiving a valid password from a user the
system reacts with an appropriate menu for that particular user and
terminal. Security profiles are stored at branch and DIY level in a
password protected dataset. They are not held centrally at Head Office
as this would render the data processing facility entirely
dependent on the continuous availability of the physical network.
Access to security profiles is restricted to the system controller
and assistant system controller at each branch.
Certain users also have the ability to initiate (indirect) transactions
at other branches.
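
As an illustration of how such a profile lookup could behave, here is a
minimal sketch in present-day Python; the case study defines no
implementation, so all names, passwords and transactions are invented:

    # A security profile is a defined combination of user, terminal and
    # permitted transactions (illustrative data only).
    PROFILES = {
        ("jdevries", "BR03-T01"): ["order_entry", "stock_enquiry"],
        ("controller", "BR03-T09"): ["order_entry", "stock_enquiry",
                                     "profile_maintenance"],
    }
    PASSWORDS = {"jdevries": "s3cret", "controller": "adm1n"}

    def log_on(user, password, terminal):
        # Log-on is refused unless the password is valid.
        if PASSWORDS.get(user) != password:
            return None
        # The menu offered is limited to the transactions defined for this
        # particular user/terminal combination.
        return PROFILES.get((user, terminal), [])

    print(log_on("jdevries", "s3cret", "BR03-T01"))  # the two transactions
    print(log_on("jdevries", "s3cret", "BR07-T05"))  # [] - no profile here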
2.1.5. Communication Network
1. Branch minicomputers are linked by means of leased lines to the
Head Office computer.
2. Subsidiary communication between branches and DIY micros and
between branches and other branches takes place via dial-up lines.
3. Communications to the national payments system take place via Data
Network 1 (DN-1).
4. Local communication is via a local area network connecting all
peripherals to the central computer.
5. The whole network allows transmission of text, data, voice and
images to every workstation.
6. Network organisation and control takes place at Head Office.
Network functions
1. Provision of structured data through automatic applications to the
various phases of the business process.
2. Document control
An electronic archive is maintained allowing user access through
defined codes and keywords (see also 2.2.1. and 9 below - image
processing).
3. Data management
Data management allows existing data to be accessed, searched,
selected, sorted and printed through use of the ODP query
language.
4. Electronic notebook
Ideas, remarks and messages can be stored in an electronic notebook
and speedily retrieved later.
5. Time control and planning
This function contains an electronic diary and reminder and a
daily "to do" list
6. Electronic message service
This function ensures that documents, memos, telephone messages
and messages read by the document readers can be stored and sent
to all workstations within the network.
7. Decision support systems
Under this heading fall analysis functions, electronic worksheets
etc.
8. Image Processing
Any desired document can be read and stored in digital format by
the document readers.
To obtain hardcopy again later, a document printer is necessary.
2.1.6. Restart, recovery and back-up facilities
Every computer in the network maintains a log allowing for a "warm"
restart through use of normal back-out procedures. Should such a
restart fail, central systems development can be called in to assist
with Diagnostic Aid - a "help" utility allowing remote correction of
errors.
Total system dumps are taken daily in the event of a "cold" restart
being necessary.
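
As an illustration of the back-out idea behind a "warm" restart, here is a
minimal sketch in present-day Python; the case study names no software, so
the log format and all names are invented:

    def warm_restart(log):
        # log: list of (txn_id, action) tuples. Any transaction without a
        # COMMIT record is still in flight and must be backed out.
        committed = {txn for txn, action in log if action == "COMMIT"}
        # Undo, in reverse order, every update of an open transaction.
        for txn, action in reversed(log):
            if txn not in committed and action not in ("BEGIN", "COMMIT"):
                print(f"backing out txn {txn}: undo {action}")

    log = [(1, "BEGIN"), (1, "update stock"), (1, "COMMIT"),
           (2, "BEGIN"), (2, "update debtor")]   # txn 2 was in flight
    warm_restart(log)                            # backs out txn 2 only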
With the exception of the Head Office machine, there are several of
each type of computer in the network so that hardware back-up is
always available. The EDP manager considers that full back-up requires
the following (all of which are possible or available):
- availability of all components;
- availability of external and local networks;
- availability of program libraries;
- the possibility of connecting branch and DIY peripherals to other
computers with lines of sufficient capacity;
- sufficient reserve capacity on other computers.
2.2. Centralised vs. Decentralised processing
2.2.1. Image processing
Electronic copies of documents in digital format can be made at Head
Office and at all branches.
Incoming documents are given a reference number and identification
key (initials, department) by the post room and are then archived
chronologically.
All further document handling takes place electronically.
Messages and documents also arrive electronically from outside the
office. These are referenced and identified by the post room, as
far as is necessary, though this can be done by the sender.
If necessary a document reader/printer can be used to make hard
copy of documents.
N.B. Image processing and data capture are not integrated. Thus
certain documents submitted for image processing have to be
resubmitted at a later stage for data capture.
2.2.2. Buying
Goods held at the central store are bought by the central buying department
at Head Office. Goods held at branches (basically the faster
moving items) and goods sold by the DIY's are bought by the branch
buying departments.
Orders are placed by whichever method suits the supplier best:
- direct orders per telephone, or
- directly to the supplier's own computer, or
- via a Viditel gateway, or
- on manually prepared order forms.
In all cases, however, electronic copies of the orders are kept.
2.2.3. Goods receiving
Bar-code stickers are generated as a by-product of the goods receiving
process for all incoming goods. These may be electronically read for:
- stock-take purposes
- make-up of goods received and delivery notes
- point-of-sale recording at DIY's.
The store or DIY first receiving the goods ensures that the bar-code
stickers are generated and attached.
Goods may be received (and the receipt made known to the system) at
any of the stores and at the DIY's since some suppliers deliver
direct to the DIY's. At the Breda and Rotterdam branches this is done
through a voice recognition system.
For deliveries from third parties the system attempts to match (see
2.2.7.) the order, the goods received note and the invoice. (Invoices
are received electronically or as conventional documents. In either
case further handling is done with electronic copies).
For inter-branch transfers a delivery note is recorded at the sending
branch. If no corresponding goods receipt is generated by the following
day, the system highlights this fact for the internal audit department.
2.2.4. Sales
Sales take place at branches (wholesale) and at the DIY shops (retail).
DIY's have point-of-sale terminals and bar-code readers. The
system reads the product number from the bar-code and retrieves the
price from a table maintained on the DIY microcomputer.
Some customers use electronic payment cards.* Payment with these cards
is guaranteed up to certain amounts. Each card contains a chip on
which transaction amounts and remaining credit are recorded. Stock
levels are immediately adjusted on the computer.
Sales by branches may be initiated in four ways:
- Large customers place orders electronically. To this end they
have direct access to Distrimax's computer.
These customers know the Distrimax product numbers.
After approval by sales, stores picking orders are automatically
generated. Stores personnel deliver the order to distribution who
do the delivery and prepare delivery notes (using bar-code readers).
The system then matches picking orders with delivery notes.
- Medium-sized customers order through Viditel gateways. After sales
has checked stock levels, approval is given and further processing
takes place as described above.
- Smaller customers place manual orders. Sales then prepares an order
form which is electronically copied and processed in the usual
manner.
- Cash sales (incidental and not to individuals).
Processed in the usual manner.
* Cards are made available by the customer's bank and payment is
effected directly by the bank via the national payment network.
Prior to delivery, the credit limit of the particular debtor is
checked by reference to the Head Office computer where total balances
and limits are maintained for each customer.
In certain instances delivery of goods may be initiated by one branch
out of the stock of another branch. Where the quantities involved are
relatively small, this may be done without notifying the branch
concerned. Because this means that delivery is effected from one
branch and invoicing and receipt of payment from another, daily
financial administration includes routines to automatically generate
journal entries in the inter-branch current accounts.
This is indeed also the system where goods are delivered from central
stores to the branch stores.
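
A sketch (present-day Python, invented figures; the case study prescribes
no implementation) of the credit check against the Head Office balances and
of the automatically generated inter-branch entries:

    # Total balance and credit limit per debtor, maintained at Head Office.
    HEAD_OFFICE = {4711: {"balance": 8000.00, "limit": 10000.00}}

    def credit_ok(debtor, order_amount):
        acc = HEAD_OFFICE[debtor]
        return acc["balance"] + order_amount <= acc["limit"]

    def inter_branch_entries(delivering, invoicing, amount):
        # Illustrative double entries on the inter-branch current accounts
        # when one branch delivers what another branch invoices.
        return [(delivering, "credit stock", amount),
                (delivering, "debit current account " + invoicing, amount),
                (invoicing, "credit current account " + delivering, amount),
                (invoicing, "debit debtors", amount)]

    if credit_ok(4711, 1500.00):
        for entry in inter_branch_entries("Branch 3", "Branch 7", 1500.00):
            print(entry)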
2.2.5. Stocks
Stocks are kept up-to-date on a real-time system for both incoming
and outgoing goods and are valued at selling price. Product numbers
are issued in the buying department.
Daily, each branch and DIY computer checks re-order levels of all
line items. Where re-order is necessary:
- re-order advices are generated for goods to be delivered directly
by suppliers. These must be authorised by the branch buyer.
Further processing takes place as in 2.2. above. Or,
- an inter-branch order is generated for central stores. In a second
processing routine, at Head Office, central stores items are checked
and re-ordered in the same way.
Stock-taking is done at all stores and DIY's using a portable bar-code
reader and diskette-writer.
2.2.6. Debtors
Debtors Administration is fed automatically from the branch and
DIY sales records. At branch level full details of debtors are carried
while at Head Office a stripped-down file, containing only balances
is maintained.
Receipts from debtors come in via the national payments network and
are automatically credited to the debtors account. Where this is not
possible (no matching on debtor number, invoice number and amount) the
receipt is credited to a suspense account.
Periodically the bank sends hardcopy confirmation of all cash received
on behalf of Distrimax.
Debtors numbers are issued at Head Office.
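
The matching rule for receipts can be illustrated with a short present-day
Python sketch (invented data; not part of the case study):

    open_invoices = {(4711, "F550123"): 1200.00, (4850, "F990001"): 75.50}

    def post_receipt(debtor, invoice, amount):
        # Credit the debtor only when debtor number, invoice number and
        # amount all match; otherwise the receipt goes to suspense.
        if open_invoices.get((debtor, invoice)) == amount:
            del open_invoices[(debtor, invoice)]
            return f"credited to debtor {debtor}"
        return "credited to suspense account"

    print(post_receipt(4711, "F550123", 1200.00))  # credited to debtor 4711
    print(post_receipt(4850, "F990001", 70.00))    # amount differs: suspense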
2.2.7. Creditors
The first process to take place (either at branch level or at Head
Office) against the electronic copy of the creditor invoice is the
approval for booking the charge (done through an automated invoice
register).
From time to time, and where necessary, product numbers are captured
on to the invoices. The system (see also 2.2.3) matches invoices,
goods received notes, and purchase orders. Matching takes place on
creditor, number of articles and product numbers.
On successful matching the system generates a payment date and, using
this date, produces, twice per week, a "suggested payment" report.
Payments must then be authorised before they will be released into the
national payments network.
Creditor numbers are issued by the buying office concerned.
Liabilities in respect of outstanding orders are stored separately.
Through the network Head Office receives, on a regular basis,
balances and totals of all branch creditors.
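
A sketch of the three-way match described above, in present-day Python with
invented data; the matching keys (creditor, number of articles, product
numbers) are taken from the text, everything else is assumed:

    def three_way_match(order, grn, invoice):
        # Match purchase order, goods received note and invoice on the
        # keys named in the case study.
        keys = ("creditor", "product", "quantity")
        if all(order[k] == grn[k] == invoice[k] for k in keys):
            return "matched: payment date generated"
        return "unmatched: held for investigation"

    order   = {"creditor": 2001, "product": "P-1001", "quantity": 40}
    grn     = {"creditor": 2001, "product": "P-1001", "quantity": 40}
    invoice = {"creditor": 2001, "product": "P-1001", "quantity": 38}
    print(three_way_match(order, grn, invoice))  # unmatched: quantity differs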
2.2.8. Financial Administration
Financial administration takes place on a decentralised basis at
DIY's, branches, and at Head Office.
Consolidation of all bookkeeping takes place at Head Office.
Apart from memo and adjusting journal entries processed at Head
Office, the financial systems receive automatically generated entries
on a daily basis, from the following subsystems:
- buying - debtors
- goods receiving - creditors
- sales - cash receipts
- stocks - cash payments
- deliveries - inter-branch accounting
10 September 1984
Critical aspects of internal control and the role of
the (EDP-)auditor in the development of a large
scale complex system (James H. David).
Impact of computerization on audit risk analysis
(Eric Guldentops).
Security and audit of operating systems (including
manufacturers' utilities) (Herman Roos).
10 September 1984
The EDP Auditor's Role
in Development of Distributed Systems
By James H. David
A Paper to be Presented at the
International Symposium on Auditing in an
Advanced Complex Computer Environment
Amsterdam, The Netherlands
September 1984
The purpose of this paper is to discuss the critical aspects
of internal control in a large scale complex distributed
data processing system and the role of the EDP auditor in
the development of such a system. The paper is intended for
auditors with a basic understanding of EDP accounting
controls, EDP concepts and auditing in computerized
environments. As is discussed later, many of the auditor's
concerns, as well as the internal accounting control and
operating procedures responsive to those concerns, are
similar to those found in stand-alone on-line systems,
systems employing databases and minicomputer and small
business computer systems. This paper assumes a familiarity
with these environments.

Reference sources that deal with the auditor's concerns in
such environments include: Audit and Control Considerations
in a Minicomputer or Small Business Computer Environment
(AICPA - 1981); Audit and Control Considerations in an
On-line Environment (AICPA - 1983) and Report of the Joint
Data Base Task Force (published by the AICPA on behalf of
itself, The Canadian Institute of Chartered Accountants and
the Institute of Internal Auditors - 1983).

DISTRIBUTED DATA PROCESSING

Distributed data processing ("DDP") is a term that is being
used with increasing frequency. It, however, means different
things to different people. Some working definitions of DDP
are overly broad. For instance, in Information Processing
(Science Research Associates, Inc., Third Edition - 1980),
Marilyn Bohl defines DDP as:

"The dispersion and use of the processing power, or
intelligence, within an EDP system among
geographically separated locations"

Under such a definition, however, DDP
can exist without the use of data communications. For
example, a factory that processes its local payroll on a
small computer and sends the output tape by courier to
another location to prepare payroll checks could be
considered a form of DDP. Such an all encompassing
definition therefore will not be used in this paper.
Rather than attempting to define distributed data processing
for purposes of this paper, DDP will be assumed to have the
following characteristics:
o The DDP network will be hierarchical with a host
computer (level I), minicomputers (level II)
microcomputers and intelligent terminals (level
III) and data entry terminals (level IV).
o Certain applications will continue to be
centralized on the host computer while other
applications will be distributed to levels in the
hierarchy.
o Data files (or portions thereof) will be locally
stored at remote locations.
o Each level of processing capability will be able to
access data at other locations through a
telecommunications network.
o End users of the system will have the ability to
use high-level languages to manipulate data as
required for their individual needs.
The assumed characteristics of a distributed data processing
system are deliberately broad. The strategy selected by an
enterprise (e.g., degree of decentralized programming
permitted, data storage strategy) in achieving these
characteristics can, as will be seen, significantly impact
the concerns of an auditor.
INCREASING IMPORTANCE OF DDP
While certain types of distributed data processing (e.g.,
RJE) have been available for some time, there is currently
an increasing trend to develop DDP systems with some, or
all, of the characteristics outlined above. There are a
variety of reasons companies give for distributing their
data processing capabilities. The increased processing
power of, and mass storage available for minicomputers and
microcomputers, as well as significant reductions in costs
of such machines is certainly a contributing factor.
Another reason often given is that DDP provides at least a
partial answer to the increasingly critical concerns related
to a company's ability to continue to operate if a disaster
were to occur at its centralized site. The real motivation,
however, appears to be a need to distribute the computer
power and data storage capabilities to a level that will
provide operating management the timely and accurate
information required for meaningful planning.
The decision to distribute data processing within an
organization is not one to be made lightly. The potential
benefits and pitfalls should be well known to management
prior to embarking on such a course. The items to consider
in making a decision to adopt a DDP system are beyond the
scope of this paper. However, those auditors involved with
companies contemplating such a move should know that it is
not a trivial decision and unless the implications of DDP
are understood by top management, the DDP architecture
adequately planned and designed, as well as continuously
managed, DDP is doomed to failure.
IMPACT OF DDP ON INTERNAL CONTROLS
There is a line of reasoning within the audit profession
that states that DDP does not bring any unique concerns to
the auditor nor does it require knowledge that an auditor
has not already acquired in a variety of environments. This
line of reasoning concludes that the auditor's concerns with
regard to DDP merely encompass concerns related to:
o On-line systems
o Database systems
o Minicomputer or Small Business Computers
While such reasoning certainly has more than a germ of truth
to it, such an approach is probably an over-simplification
of the problems an auditor finds when confronted with a DDP
system. The confluence of concerns related to mini/micros,
database and on-line systems and the impact on their
interaction requires more than mere iteration of the
individual concerns for each related element. Depending
upon the strategy adopted in implementing a DDP system, the
degree of exposure that exists in a stand-alone on-line,
database, or micro/minicomputer environment might be
significantly changed. For instance, the risk of
unauthorized program changes in a minicomputer environment
caused by poor segregation of duties might be mitigated in a
DDP system when the same minicomputer, operated by the same
number of people as in the stand-alone environment, has no
compiler and all programs are in object code downloaded from
a host mainframe. Alternatively, the level of password
protection available in a mainframe, on-line system may not
be supportable on a microcomputer at a distributed site and
thus increase the risk of unauthorized changes to data.
There are certain concerns that EDP auditors have that do
not relate to the traditional internal accounting control
system. Some are related to operational efficiency, such as
all levels of a DDP system having a standard
telecommunications line protocol; some to the ability of the
company to recover from disasters (from losing a transaction
to losing the EDP center); and others to peripheral matters,
such as the availability of current documentation. While
these concerns are real and can be significantly impacted by
the adoption of a distributed data processing system, they
will not be dealt with in depth in this paper. Rather, only
those concerns that are truly critical aspects in a system
of internal accounting control in a large scale distributed
system will be discussed.
CRITICAL ASPECTS OF INTERNAL CONTROL
The critical aspects of internal control in a distributed
data processing system relate to the ability of the DP
portion of the system to achieve the objective of completely
and accurately processing authorized transactions so as to
permit the preparation of accurate financial statements and
provide executives with information required to manage the
business. One view of the requirements of EDP controls
required to achieve this objective is to gain reasonable
assurance that:
o Development of and changes to computer programs are
authorized, tested and approved ("Controls over
Programs").
o Access to data files is restricted to authorized
users and programs ("Controls over Access").
Gaining reasonable assurance that there are sufficient
procedures to achieve controls over programs and access is
dependent upon the assumption underlying all internal
control systems - that there is an appropriate segregation
of duties. There is an expectation that many of the level
III and IV locations in a hierarchical distributed data
processing system will not have sufficient personnel within
data processing to provide for an adequate segregation of
duties. The auditor's concerns in such a situation are
similar to those he would encounter in any minicomputer or
small business environment. However, these concerns can, in
certain circumstances, be mitigated by the strategies
adopted in developing the DDP system. For example, certain
critical functions of the system (e.g., access routines) may
be delivered to such locations in microcode on chips that
would significantly decrease the ability of local personnel
to make unauthorized program changes. The ability of a DDP
system, if properly designed, to mitigate some of the
auditor's concerns with respect to an inappropriate
segregation of duties is important for the auditor to keep
in mind during system development.
Control Over Programs
In establishing a DDP system, a company may approach
application development and program maintenance on (1) a
centralized basis or (2) a decentralized basis or (3) some
combination of both.
If a centralized approach to application development and
program maintenance is adopted, all programs would be
developed and maintained by a corporate data processing
staff. The corporate staff would likely have controls over
each stage of the system development life cycle ("SDLC")
similar to those used for applications maintained on the
host computer. Level II and below locations would only have
object code versions of production programs downloaded from
the host mainframe and the remote locations would have
neither on-line programming capabilities nor access to a
compiler. If such an approach to application development
and maintenance is adopted for DDP system, the auditor's
special concern would be over synchronization of changes to
programs: (i.e., are all remote locations using the most
recent version of their application programs and when
program changes are required are they made at all
installations simultaneously?)
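
One way such a synchronization check might be expressed, as a present-day
Python sketch rather than a method proposed in this paper (all names and
version identifiers are invented):

    # Compare each remote location's object-code releases against the
    # host's current versions and report exceptions for follow-up.
    host_versions = {"AR100": "v3.2", "GL200": "v1.7"}
    remote_versions = {
        "branch_1": {"AR100": "v3.2", "GL200": "v1.7"},
        "branch_2": {"AR100": "v3.1", "GL200": "v1.7"},  # out of date
    }

    for site, programs in remote_versions.items():
        for program, version in programs.items():
            if version != host_versions[program]:
                print(f"{site}: {program} at {version}, host at "
                      f"{host_versions[program]} - exception for follow-up")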
When application development and maintenance responsibility
is decentralized so each location at every level of the DDP
system develops and maintains its own programs, the auditor
becomes concerned with consistency among the various
locations. He is concerned that consistent development and
maintenance standards are in effect and are being enforced.
If each location can develop/maintain its own applications,
he is concerned that like applications (e.g., accounts
receivable) at different sites perform the same processing
and include the same controls, and how such consistency is
ensured.
For installations adopting a decentralized approach to
program development and maintenance, some programs will be
developed on smaller computers. The auditor should be aware
that many mini and micro computers use interpretive
languages (e.g., BASIC) that are executed line-by-line. The
user of an application therefore has ready access to the
source code and can modify it while processing is taking
place without leaving any evidence of the change. Further,
in many distributed environments, the users are provided
with powerful utility programs and fourth-generation
languages to manipulate data to fill their needs. Without
appropriate access controls, there is a potential for using
such programming languages to change data.
If an approach to developing and maintaining programs is a
combination of centralized and decentralized strategies (the
most common occurrence), the auditor's concerns are likewise
a mixture of those he has in the two environments.
Controls Over Access
Controls over access to data (including program) files in a
DDP system depend on control techniques that are similar to
those employed in other environments (e.g., DBMS features,
passwords, security packages, terminal sign-on procedures)
but for them to function appropriately, certain changes
might be required in a DDP system, such as possibly needing
a security officer at each remote location to maintain
password tables and investigate unauthorized access attempts
on a timely basis. The suitability of such techniques (and
indeed their availability) is significantly influenced by
the data storage strategy adopted. Three typical storage
strategies and the concern an auditor might have in addition
to his normal concerns over access to data are:
o Full file duplication which provides a copy of all
accessible records at each location. As any
location enters data that changes a record, the
change is transmitted to all other locations so the
data base can be brought back into synchronization.
The auditor has a special concern over the
procedures used to achieve concurrence of data at
all locations.
o Partitioned files are portions of the entire file
maintained on the host that is provided to each
location and contains only those records most
likely to be required by the location. Should the
location need other records, the transactions
requiring them will be passed through the
telecommunications network for processing to the
location with the required records. The auditor's
special concern in this situation is that the file
maintained on the host computer is equal to the sum
of the portions at the distributed locations.
o Detailed/summary hierarchy strategy is a form of
file distribution which places all of the necessary
detail data at each remote location and maintains
aggregate data in summary only at the central site.
In such an instance, the auditor would be concerned
about synchronization of detail data between remote
sites (similar to full file duplication) and
agreement between the host's summary data and the
remote locations' detailed data.
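
The reconciliation implied by the last two strategies can be illustrated
with a short sketch (present-day Python, invented figures; the paper itself
prescribes no technique):

    # The host's summary figure must equal the sum of the detail data
    # held at the distributed locations.
    host_summary = 151_000.00
    remote_detail = {"location_A": [50_000.00, 25_000.00],
                     "location_B": [60_000.00, 15_000.00]}

    distributed_total = sum(sum(balances)
                            for balances in remote_detail.values())
    if distributed_total != host_summary:
        print(f"out of balance: host {host_summary:.2f}, "
              f"distributed locations {distributed_total:.2f}")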
Distributed data processing systems may, depending upon
their design, create special concerns that are not present
in on-line systems restricted to a single location. These
concerns relate to the ability of computer devices of
non-authorized users to access the data files through
dial-up lines or through public telecommunications networks.
Special care must be taken to determine that these access
abilities, provided to nodes within the company's DDP system,
are not abused by outsiders.
Similarly, each remote location within a network must be
provided the ability to check that transactions passed to it
by other nodes within the system network are authorized.
The auditor must be concerned that access rules for every
location include rules for transactions received through the
telecommunications system that are as comprehensive as
access rules for transactions generated at its own site.
ROLE OF THE EDP AUDITOR IN THE DEVELOPMENT OF A DDP SYSTEM
An EDP auditor should be involved in the development of a
distributed data processing system just as he would be for
the development of any computerized application that could
impact financial data or other information required by
management. The auditor's traditional role in the
development of a computer system is to determine (1) that
the entity's policy and procedures related to the system
development life cycle are appropriately followed, (2) that
appropriate control objectives and procedures to achieve
them have been identified in the development process, (3)
that areas of special concern (e.g., back-up and recovery)
are addressed and (4) that the resulting system is
auditable.
During the development of a DDP system, the auditor must
assess the impact the alternative strategies selected by the
developers have on each of the four areas of his traditional
role. The type of impact of each alternative strategy on
the auditor's role regarding policies and procedures
relating to the SDLC, control procedures, and concerns such
as back-up and recovery, is too extensive to list in any
meaningful detail. Suffice it to say that the auditor
requires an in-depth understanding of the implications on
those roles of the strategy selected in areas such as types
of hardware, system software selected, telecommunication
software, degree of decentralized programming permitted,
DBMS (if any), fourth-generation programming language
availability to users, and data storage methodology.
For instance, in considering procedures for back-up and
recovery the auditor should consider that (1) certain
hardware demands time-consuming procedures for backing up
data files, which may not be uniformly followed under time
pressures, (2) in cases of machine or system failure
different copies of the data base may be in different stages
of updating, so that achieving resynchronization may be a
particular problem, and (3) the recovery strategy for a
remote site struck by a machine failure may require a
temporary dial-up access to another node that might by-pass
some of the access controls that had been available at the
location that suffered the failure.
The fourth area of the auditor's traditional role during
system development, determining the system is auditable, can
require significantly more effort in a DDP system. It is
not uncommon to find that the traditional hard copy audit
trail of all transactions has disappeared and that the trail
is in machine sensible data only and, further, is available
for only a relatively short period of time. This often occurs
because the users, having acquired the real time
updating and inquiry capabilities that they needed, no
longer have any requirement for the frequent detailed
print-outs of transactions and balances they previously
needed. These reports, which formed the backbone of the
audit trail, may be considered unnecessary, and too time
consuming and costly to print on the slower printing devices
available at remote locations, if their only function is for
the auditor's occasional use.
A further complication for the auditor is that the machine
readable information that was once resident in a flat file
at a centralized location may no longer be available.
Depending upon the strategy selected for data storage, the
auditor may find portions of the data stored at distributed
locations and/or the data stored in a database which is not
accessible by his audit software.
Because of these and other factors, such as the
disappearance of traditional input documents, the auditor
may have to consider using concurrent audit techniques to
achieve his audit objectives. While these techniques, such
as integrated test facilities (ITF), snapshot and system
control audit review file (SCARF), have been expounded upon
for years, most auditors have not used them as they found it
as effective and efficient to use less exotic means of
gaining audit satisfaction. In a DDP environment, however,
concurrent audit techniques may be more effective and more
cost efficient than traditional "after the fact" auditing.
The critical time for evaluating the usefulness of
concurrent audit techniques is during system design. Should
such techniques appear to be cost-effective, they can be
built into the system at significantly less cost during the
development of the DDP system than trying to retrofit them
into a functioning system at a later date.
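As an illustration of the idea (not of any particular vendor's
facility), a SCARF-like embedded audit module can be as simple as
a selection routine built into transaction processing; in this
Python sketch the materiality threshold, file name and
transaction layout are all hypothetical:

    import json

    AUDIT_FILE = "scarf_review_file.jsonl"   # hypothetical audit review file
    THRESHOLD = 10000                        # auditor-chosen materiality limit

    def process_transaction(tx):
        # ... normal application processing would take place here ...
        # Copy transactions meeting the auditor's criteria, as they
        # occur, to the audit review file for later examination.
        if tx["amount"] >= THRESHOLD or tx["override_used"]:
            with open(AUDIT_FILE, "a") as f:
                f.write(json.dumps(tx) + "\n")

    process_transaction({"id": 42, "amount": 12500, "override_used": False})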
As the number of locations in a DDP system could be great
and be geographically dispersed over significant distances,
the EDP auditor should attempt to develop a capability for
performing many of his procedures from a centralized
location. Among other procedures he will be performing,
including visits to remote locations, the auditor at the
central site should be able to:
o Perform any terminal function as if he were at a
distributed location.
o Examine any data file no matter at which location
it is physically stored.
o Examine any location's log of such things as system
failures, line failures, attempted security
violations and corrections of erroneous data.
o Examine programs at remote sites, including an
ability to compare such programs to the authorized
version, if such programs were centrally developed
(see the sketch after this list).
o Select transactions from remote sites for testing
as they occur if he elects to utilize concurrent
audit techniques.
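The program-comparison capability mentioned above can be
approximated, in present-day terms, by comparing a fingerprint of
the remote site's copy with that of the authorized master; a
minimal Python sketch (file paths hypothetical):

    import hashlib

    def file_fingerprint(path):
        # Hash the program file; identical programs yield identical hashes.
        with open(path, "rb") as f:
            return hashlib.sha256(f.read()).hexdigest()

    def matches_authorized(remote_copy, authorized_master):
        return file_fingerprint(remote_copy) == file_fingerprint(authorized_master)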
JAMES H. DAVID, C.P.A.
Partner
Ernst & Whinney
New York, New York
James H. David is the partner responsible for computer auditing and
statistical sampling for the New York Region of Ernst & Whinney.
Mr. David is currently Chairman of the EDP Auditing Standards Subcommittee
of the American Institute of CPA's Auditing Standards Board. Jim has
served as Chairman of the AICPA task force that developed the new Audit
Guide on Audits of Service-Center-Produced Records and as Chairman of the
task forces that produced the Computer Services Guidelines on Audit
Approaches for a Computerized Inventory System and Audit and Control
Considerations in an On-Line Environment. He also served as Chairman of
the Auditing Standards Board's task force that wrote SAS No. 44,
Special-Purpose Reports for Use by Other Auditors ("Single Auditor
Approach"). Jim has also been a member of the NYS Society of CPA's
Committee on Data Processing.
In his specialized field of computer control and security, Jim has lectured
and conducted courses for various professional and business organizations
including the Institute of Internal Auditors, the Foundation for Accounting
Education, the Association for Government Accountants (formerly the Federal
Government Accountants Association), the A.I.C.P.A. and the Bank
Administration Institute.
Mr. David, who is a graduate of Baruch College of the City University of
New York, is also a Certified Information Systems Auditor.
Risk Analysis in a Computer Environment
Erik Guldentops
Audit Manager
S.W.I.F.T. sc, Brussels
International Symposium on Auditing in an Advanced
Complex Computerised Environment, Nederlands Instituut
van Registeraccountants, Amsterdam, 10-12 September 1984.
Scope - Objectives - Alternatives - Definitions
The first thing that should be kept in mind when addressing the
subject of Risk Analysis is that it is really a part of a larger
area known as Risk Management.
Risk Management itself consists of
- risk assessment,
- safeguard evaluation and selection, and
- implementation of safeguards.
The term Risk Analysis, as used nowadays in practice as well as in
the literature, has two meanings. In the first sense, it comprises
the first two areas of risk management, i.e. risk assessment and
safeguard selection. The second meaning refers to the techniques
(mostly quantitative) of measuring risks, and therefore comprises only
the first part of Risk Management. If we stick to the second meaning of
Risk Analysis, and if we limit our field of interest to data processing,
we can define Risk Analysis as a method used to quantify the impact of
potential threats on organisations supported by data processing through the
evaluation of the possible damage that results from unfavourable events and
the evaluation of the probability of such an event occurring.
Why do we perform Risk Analysis ?
Basically to quantify exposures for management so they can decide to reduce
exposure to an acceptable level, and to gain "reasonable assurance" that
alternatives for security are both feasible and economical.
The requirements for security alternatives to be feasible and economical are
best represented by the following graph :
[Graph: cost/benefit of controls - cost of measures plotted
against degree of security]
The definition and objective of risk analysis are fairly common
knowledge; however, what is often not considered - although the
ultimate objective remains - is that the risk analysis exercise
could be initiated for 3 different reasons :
(1) as a comparison to state-of-the-art or to
the rest of the industry
(2) for the evaluation of an operational decision
(3) as a form of continuous risk monitoring.
Each of these reasons creates a different environment for the risk
analysis exercise, and needs different methods. To only mention one
distinct difference : the evaluation of an operational decision will
require a much more quantitative approach whereas the other two
reasons, depending on available data, will cause the exercise to be
more of a qualitative affair.
But what are the alternatives to Risk Analysis ?
Relative to the area of computer security we could first implement
controls subjectively based on professional judgement. Or we could do
it in a more structured manner, and follow the traditional audit
approach which consists of reviewing systems on a rotational basis and
implementing controls as deficiencies are found.
However both of these alternatives fail to provide
(1) quantification of dependence on computers
(2) proven consistent techniques for comparison between
systems
(3) quantification and ranking of exposures to support
argumentation for the scope of internal audit or security
reviews
(4) a systematic approach to measure cost/benefit of controls.
A good overview of the three methodologies - risk analysis, EDP
audit and security review - has been published in a recent NBS publication
dealing with methods for measuring the level of computer security.
Difference in Purpose

    Risk Analysis              EDP Audit                  Security Review

1.  Budgetary; optimally      1. Assess controls         1. Assess defenses
    allocate resources           policy compliance
2.  Emphasize threats,        2. Emphasize controls      2. Emphasize controls
    assets, attack
    frequencies
3.  Control existence and     3. Proper control          3. Unanticipated ways
    general effectiveness        functioning against        to subvert or
                                 anticipated threats/       bypass controls
                                 attacks
4.  Balanced emphasis on      4. Often emphasize         4. Usually emphasize
    exposures                    modification (ensuring     disclosure (ensuring
                                 systems "tell the          systems "protect
                                 truth")                    secrets")
5.  Controls considered       5. Controls considered     5. Controls considered
    last                         early                      early
6.  Installation-oriented     6. Primarily application   6. All inclusive
                                 but also system
                                 oriented
7.  Usually quantitative      7. Qualitative             7. Qualitative
8.  Mutually exclusive        8. Overlapping exposures   8. Often partial
    exposures                                               exposures
9.  Balanced evaluation       9. Focus on key areas      9. Focus on key areas
Before getting into the actual subject matter, it could be beneficial
to repeat some definitions of terms we will use :
A THREAT is a single identified factor able to do harm or damage.
AN EXPOSURE is the condition of being exposed to the consequence
of threats due to vulnerabilities (a concern).
SECURITY is the absence of circumstances susceptible to causing
degradation and/or loss of assets, and assets also include the
service provided by a computer system and its associated concerns
for availability, integrity and privacy.
AN ACCEPTABLE LEVEL OF SECURITY is the characteristic of a system
which has a security programme in which each known event represents
an acceptable risk.
A RISK is the extent of the state of the system or the extent of an
event characterised by
- its probability of occurrence
- its consequential loss.
AN ACCEPTABLE RISK is the extent of the state of the system or the
extent of an event which has been limited to a certain proportion of
its original extent by an explicit decision. This decision represents a
control and can be procedural, organisational or built into the
software or hardware.
General Approach and Issues
STEPS                                NEEDS

1. Define conditions of exposure     Sources of evidence
2. Identify adverse effect             Intuitive
3. Relate exposure/effect                . traditional knowledge
                                         . common sense
                                         . analogy
                                       Systematic
                                         . review of exposure
                                         . statistical analysis
                                         . testing/experiments
4. Expressing risk                   Tools
                                       . probability theory
                                       . relationships
5. Judging risk                      Criteria
                                       . acceptability of risk
                                       . reasonable measures
                                       . judgement considerations
The general lines of investigation in a risk analysis exercise
would start off with the definition of the conditions of exposure
a system or an organisation is experiencing. The basic questions
to ask are : "who is exposed to what, in what way, for how
long and at what frequency ?"
The second step would consist of identifying the adverse effects
of the exposure, i.e. "what is the threat and the associated damage ?". This
is followed immediately by relating exposure and effect by asking
"How much adverse effect results from how much exposure ?"
Some graphs may better illustrate this relationship :
[Graphs: standard exposure/effect function; threshold effect;
extrapolation in lower regions]
The typical graph shows that the increase of effect is low for low
degrees of exposure. The increase accelerates after a certain degree
has been reached, and tapers off again in the end because there is
a limit to the effect. The second graph shows that for certain types
of exposures there is a threshold effect, meaning that at a certain
degree of exposure the effect becomes intolerable.
This would be the case when we are considering business continuation
and survival. Finally it is evident that in lower regions of
exposure there is very little data available, and in absence of data,
very little experience or intuitive feel for possible effect, creating
the need to extrapolate the relation.
There are two sources of evidence that are at our disposal in these first
three steps : intuitive or systematic. Traditional
knowledge, common sense and analogy to known cases are clearly intuitive in
nature. The quality of input from these sources will depend on the experience
of the risk analysis team.
The first of the systematic sources of evidence is the review
of inadvertent exposure, the nature of which depends
of course on the nature of the activity that is being analysed.
In most cases many indicators which show what inadvertent exposure systems
are experiencing are known or can easily be found.
A review of inadvertent exposure, in relation to risk analysis of
computer systems, is simply something we know as a computer security
review.
The second systematic manner of collecting evidence is statistical
analysis. However, historic data is in most cases scarce because of the
low frequency of occurrence of comparable events. Even
more, in cases of errors, omissions and fraud, the information will not
always be disclosed. Additionally, when data is available it may be of a
local nature. Further, there is the difficulty of proving the causal
relationship between the data on the exposure and the effect experienced.
And finally there is the significance of the sample findings, i.e. how
significant are the few cases on which we have data as compared to the set
they belong to.
The significance of the sample is also crucial if sampling is chosen as
the testing or experimenting approach. For testing in general, the degree
to which test conditions approximate real life conditions is equally
important. I may be painting a gloomy picture as regards the use of
statistical data. Nevertheless, let it be known that if data is available and
we keep aware of its shortcomings, it is the best place to start, after which
we can build onto it from our intuitive sources.
As far as expressing risk is concerned, it should be remembered that risk is a
measure based on probability, output from a scientific effort, which becomes
input to a personal, social or business decision-making process.
The expression of risk is a compound measure describing both probability
and severity. It is often expressed in asset value per year, but the
magnitude of these figures is often difficult to grasp; therefore
comparison to other risks will help.
And this brings us to the last step of our general approach : the judging
of risk. As we saw in the beginning, through the definitions, the aim is
to achieve an acceptable level of security. But acceptable by whom ?
According to what standards ? Acceptable is too pervasive a term to
use, the term reasonable is much more practical.
For example, a preventable risk is not reasonable
(a) when users do not know that it exists ; or
(b) when, though aware, users are unable to
estimate its frequency and severity; or
(c) when users do not know how to cope with it
and hence are likely to incur harm
unnecessarily; or
(d) when risk is unnecessary in that it could be
solved or eliminated at a cost in money or
performance that users would willingly pay if they knew
the facts and were given the choice.
Again, what are reasonable measures ? This is difficult to define,
but we could be guided by comparing on the one hand, prevailing
professional practice and the best available practice; and on the
other hand, by comparing the highest practicable protection and
the lowest practicable exposure. We should also keep in mind the
threshold principle, irrespective of the fact it is difficult to
establish thresholds above which effect is not acceptable.
To finish off this theoretical section I would like to give
some considerations for risk judgement :
(1) difference in judgement on the same risk or control
if there are alternatives
(2) fatalistic acceptance in absence of alternatives
(3) danger of over-reaction to certain risks, i.e. need
to keep a balanced view
(4) awareness that a consensus on risk levels and control
needs can rarely be reached
(5) the danger of not judging high enough those risks that
could have irreversible effects.
Concluding the first part and as an introduction to the second
part, we could look at a schematic of a general risk analysis
approach :
Management support
     for
a team approach with a mix of expertise
     to
assess costs & benefits of implementing particular controls
     by
identifying                           determining
  . asset replacement costs             . exposure of risk
  . threats                             . impact
  . existing security measures          . frequency
using                                 using
  . checklist                           . available data
  . experience                          . recursive scoring techniques
  . consistent forms                    . intuitive feel
                   \ /
results : alternatives of cost/beneficial controls
The 7 basic steps of a Risk Analysis Exercise
Irrespective of the situation, the objective or the methodology
selected, the following major steps have to be performed :
1. Identify the assets
2. Evaluate the assets
3. Identify concerns/threats/vulnerabilities
4. Estimate impact upon occurrence
5. Estimate frequency of occurrence
6. Calculate individual and total risk
7. Select and evaluate countermeasures.
We will now go through each of the steps and consider the
different possibilities, concerns and issues that exist.
The guiding line used throughout the review of the different
steps will be the most propagated risk analysis method, first
proposed by COURTNEY while he was still at IBM and
subsequently used as the basis of 2 National Bureau of Standards
publications, i.e. FIPS publ. 31 and FIPS publ. 65. Let's refer
to it as the COURTNEY METHOD.
3.1. STEP 1 : Identify assets
The analysis of assets requires that they be subdivided into
areas in order to put some structure into the effort. In step 3
we will pair these assets to concerns such that the subdivision
in areas should also be applicable to the concerns.
One possible list of areas is
areas              example
- technical        software, hardware
- physical         environment, building
- personnel        computing personnel, engineering staff
- administration   documentation, procedures
If the goal of the exercise is to evaluate risks for a particular
system the choice of areas is not important, as long as there
is a structure to help guarantee completeness of the list of
identified assets.
However if the goal is to develop a standard Risk Analysis method for
use by different departments or organisations, some more time should
be spent on the structure.
On the other hand, many methodologies, and even the original COURTNEY
METHOD, anticipate already at this stage the fact that concerns and
threats will have to be structured at step 3, and therefore use the
structure of concerns and threats to organise assets.
The reason for this is that the original method was envisaged to assess
risks solely in an EDP environment where harm always manifests itself
as a loss of one or more of 3 conditions :
- data integrity
- data confidentiality
- EDP availability.
Because it is difficult to imagine every event which has an unfavourable
effect, the original method uses data files and applications as a list of
assets because there is a finite number of them.
At this point it is worth noting that computer installations
are very often considered as one asset.
In fact they consist of 2 distinct assets which have to be considered
separately. First there is the physical installation as an asset
and secondly there is its availability.
Before concluding this first step of the process we have to address
one of the many issues of risk analysis, it is the problem of
overlapping assets and concerns. If we split them into areas, certain
risks are bound to overlap, causing some risks to be counted more than once.
However there is no methodology that guarantees identification and
quantification of mutually exclusive risks. Nevertheless arriving at a
list of mutually exclusive risks in order to achieve an accurate
overview remains an objective of risk analysis. This can be achieved
to a certain degree because of the repetitive nature of every risk
analysis exercise and also because we will never achieve exact figures
but rather ranges or orders of magnitude, such that counting smaller parts
of risks more than once will not greatly affect the outcome.
STEP 2 : Evaluate the assets
In fact this step could be combined with step 4 where we will
estimate the impact of threats and vulnerabilities on the
assets. However I feel that the structure of the exercise would
greatly improve if assets are valued at this stage to make the
distinction between asset value and asset damage.
In case risk analysis is used as a standard method for evaluation,
it is evidently important that the asset values as well as the asset
lists are continuously updated, even though we will only use orders of
magnitude to express their value. Orders of magnitude are sufficiently
accurate for the purpose of risk analysis in most cases.
The orders of magnitude most commonly used are the powers of 10.
In case qualitative measures are needed to increase consistency in
the evaluation they could be added as in the following example.
VALUE RATING TABLE

Rate (i)          Value    Qualitative Measure    Abbreviation
1                    10    Negligible             N
2                   100    Very low               VL
3                 1,000    Low                    L
4                10,000    Medium to low          ML
5               100,000    Medium                 M
6             1,000,000    Medium to high         MH
7            10,000,000    High                   H
8           100,000,000    Very high              VH
In order to get consistency in evaluation, it is recommended to apply a
top-down approach to the asset list, i.e. rate groups of assets
as a total first and then rate the individual assets. This is to
make sure that no individual asset is more valuable than the group
it belongs to, and also to make sure that the group is properly valued
as a whole.
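By way of illustration, the mapping from a rough monetary value to
the rate i is simply the value's order of magnitude; a minimal
Python sketch of the table above (the sample asset value is
invented) :

    import math

    LABELS = ["Negligible", "Very low", "Low", "Medium to low",
              "Medium", "Medium to high", "High", "Very high"]

    def value_rate(value):
        # Rate i is the nearest power of ten, clamped to the 1..8 scale.
        i = max(1, min(8, round(math.log10(value))))
        return i, LABELS[i - 1]

    print(value_rate(250000))   # -> (5, 'Medium'), since 250,000 is about 10^5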
STEP 3 : Identify concerns/threats/vulnerabilities
As mentioned before, concerns should be organised in the different
groups as recognized in step 1 when the assets were identified.
Threats in this context can be seen as a subdivision of concerns
providing again for structure in the identification process to
guarantee as much completeness as possible.
Very few of the Courtney-based methodologies consider vulnerabilities;
however, they are a great help in the next steps when assessing
the impact and frequency of the threats.
Consider the following example :
CONCERN         THREAT LIST            VULNERABILITY LIST

Fire            computer room fire     poor fire detection
                building fire          poor fire suppression
                external fire          large store of combustible
                                       material

O/S Software    alteration of          O/S flaws
                software               no system access control
                                       poor maintenance procedures
In fact the vulnerability list could be expanded into a vulnerability/
strength list because strengths will also influence subsequent ratings
of impact and frequency.
When making such lists, there are some concerns that could be
forgotten :
- decision basis
- opportunities
- errors and omissions
- time
Wrong or absent data may result in wrong decisions or no decision at
all. Such a situation could also lead to lost opportunities, which are
also concerns when evaluating risks. Errors and omissions could be
considered as a threat for every concern, or as a vulnerability for
each of the threats. Time could be seen as an asset or lost time as
a concern.
Another issue at this and subsequent steps is the difficulty
of judging personnel integrity, and it is therefore recommended to
leave it out as a contributing factor (vulnerability, strength) in a
risk analysis.
One guiding factor that could help to recognize as many threats as
possible, especially in information processing, is the transaction flow.
Following a transaction from initiation to completion, and considering
each action of transfer, process or storage will help identify
possible unfavourable events.
STEP 4 : Estimate impact upon occurrence
A good denominator for quantifying impact or possible damage is
monetary value. Nevertheless it is important to keep in mind that
we should not over-rely on monetary values which have been determined
on the basis of personal subjective reasoning. It is therefore better to
stick to the scores or the corresponding qualitative terms. There are
two ways to record the subjectivity of the impact estimate of
a certain concern against a particular asset.
The first one is to give a precision rating which indicates how
accurate the evaluator feels the estimate to be. The second consists
of giving a range of impact (high and low) using the scores on the
value table.
The first approach is recommended for estimating frequency of occurrence,
the second for estimating impact.
What we will do in this step then, is to give for every asset/concern
pair, a high and low impact score, guided by the associated threat and
vulnerability lists. The closer the high/low scores are, the more accurate
they are according to the evaluator.
The high/low score is in itself a good evaluation approach. We can simply
approach the scoring table from both ends, starting with the highest and
the lowest score and bringing them closer until one is satisfied. Apart
from that, impact evaluation is a matter of common sense and experience.
There exist more detailed damage rating approaches, such as the one proposed
in IBM's Data Security Design Handbook, to which I refer those
who want more details on the subject.
A last word on impact evaluation : it often occurs that a team in this
type of exercise gets bogged down in discussions on why or why not
undesirable events occur. Remember, they should concentrate on the potential
impact of undesirable events, not on why they occur.
STEP 5 : Estimate frequency of occurrence
We mentioned in the previous step that monetary value - while
being aware of the problem of overreliance - is a good denominator
for quantifying impact.
Since monetary and fiscal matters are organised on a yearly basis, a
year is the best time period to express expected frequency of occurrence.
The 2 most frequently used tables for rating frequency in the COURTNEY-like
methods are as follows :

FREQUENCY TABLE - Alternative 1       FREQUENCY TABLE - Alternative 2

Rate (f)   Meaning                    Rate (f)   Meaning
1          once in 300 years          0          almost never
2          once in 30 years           1          every 1000 years
3          once in 3 years            2          every 100 years
4          once in 4 months           3          every 10 years
5          once in 1 week             4          once a year
6          once in 1 day              5          once a month
7          once in 2 hours            6          twice a week
8          once in 15 minutes         7          three times a day
Again orders of magnitude are used; the rates approximately follow powers
of 10.
The position of our basis - the year - in the table will determine the form
of the formula we will use in the next step, where we calculate the risk.
The same approaches to estimation of frequency as mentioned for evaluation of
the impact in the previous step also apply here. There is the method of
approaching the table from both ends, the application of common sense
and experience, and the awareness of the degree of uncertainty.
The degree of uncertainty is best represented by the following
graph :
[Graph: degree of uncertainty plotted against frequency]
This graph indicates that uncertainty is high when frequency is low, i.e.
that the margin of error is larger. Whereas when the frequency is high,
the probability distribution becomes clear and risk can be estimated with
a smaller margin of error because the higher the frequency of occurence
the more data is available.
STEP 6 : Calculate individual and total risk
The most commonly used denominator for expressing risk is ALE
(annual loss expectancy) which evidently is the product of
impact and frequency :

    ALE = IMPACT x FREQUENCY
The two elements of the product have, in the previous steps, been
expressed in orders of magnitude, i.e. powers of 10. The result of
the multiplication is therefore also expressed as an order of
magnitude.
Impact is simply expressed as 10^i, but frequency has to be expressed
in function of one year. If we go back to the two alternative frequency
tables, we can see that for the first table the annual frequency can be
expressed as 10^(f-3)/3 and for the second table as 10^(f-4), because
the frequency rate has to be brought back to a yearly basis. Considering
that multiplying powers of the same number consists of adding the
exponents, the two alternative formulas are then :

    ALE = 10^(i+f-3)/3    or    ALE = 10^(i+f-4)
We can either use the formula or simply work straight from the tables.
This has to be done for every asset/threat pair and then summed together
which will give the annual loss expectancy figure for the whole area we
are looking at.
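As an illustration, a minimal Python sketch of this calculation
(the scores in the example are invented) :

    # ALE from the scores: impact = 10^i, annual frequency from the
    # chosen table, hence ALE = 10^(i+f-3)/3 (table 1) or 10^(i+f-4) (table 2).
    def ale_table1(i, f):
        return 10 ** (i + f - 3) / 3    # table 1: rate 3 = once in 3 years

    def ale_table2(i, f):
        return 10 ** (i + f - 4)        # table 2: rate 4 = once a year

    # Impact 100,000 (i = 5) occurring about once a month (f = 5, table 2):
    print(ale_table2(5, 5))             # -> 1000000

    # Summing over every asset/threat pair gives the total for the area.
    pairs = [(5, 5), (7, 2), (4, 6)]
    print(sum(ale_table2(i, f) for i, f in pairs))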
However, as stated before, the problem of overreliance on a pure monetary
value for the ALE remains. It is therefore, in most cases, better to use
an order of magnitude rating for the annual loss expectancy.
Based on the first frequency table we considered, a risk table could
be constructed as follows :

RISK TABLE

Score (r)   Monetary Value   Meaning
8            300,000,000     REAL
7             30,000,000     MAJOR
6              3,000,000     VERY SIGNIFICANT
5                300,000     SIGNIFICANT
4                 30,000     VERY CONCERNING
3                  3,000     CONCERNING
2                    300     MODERATE
1                     30     NEGLIGIBLE
In the example selected, the risk score is a simple relation of the
impact and frequency scores : r = i + f - 4
The choice of magnitude must of course be relevant for the system
or organisation that is being evaluated. All the combinations of impact
and frequency can be looked at (because there will be more than 8) and
that range selected which is most appropriate.
It is recommended to select an appropriate set of impact, frequency and
risk tables, and that, apart from the scores and rates, meaningful
"descriptions" are assigned. These have to be used by the evaluators in a
consistent fashion and have to be clearly understood by management. And this
really gets us into the problem of qualitative versus quantitative measuring
and reporting.
In my opinion a pure quantitative approach, where the outcome is a monetary
value, is only recommended for the evaluation of operational decisions, and
in particular as cost-justification of controls. The other two objectives of
risk analysis (comparison to the state-of-the-art or to the rest of the
industry, and as a means for continuous risk monitoring) are really better
off with a structured qualitative approach. An example will illustrate
however that qualitative and quantitative are not far apart. We have already
taken a step towards qualification by using scores that represent orders of
magnitude, but they are still numbers.
Numbers have abstract meanings and are treated by our left brain. Our right
brain is more occupied with creative, perceptual and practical matters.
Our right brain would therefore better understand a perception such as a
"very serious risk".
THREAT        ASSETS            FREQ    IMP    RISK    RATING

Accidental    Programs           4       4      4      ****
erasure       Data               5       6      7      *******
              Magnetic media     3       3      2      **
The example now illustrates that the figure 7 as a rate of risk is still fairly
abstract. Transposing the figures into a number of asterisks has a different
impact; it better sets off the risk in relation to others and is therefore
more qualitative. The difference is therefore not bigger than the difference
between the number 7 and seven asterisks.
On reporting of risk factors, I would just like to say that consistent forms
should be used, forms that support the structure of assets and threats that
has been chosen in the first steps of the exercise.
If one follows the original COURTNEY method, and considers the compromise
of data integrity, data confidentiality and systems availability as threats
against the asset list consisting of systems and files ; the following could
be an example of a form :
FILE    DATA            DATA               SYSTEM          VULNERABILITIES
        INTEGRITY       CONFIDENTIALITY    AVAILABILITY    & STRENGTHS
        (i)  ALE  (f)   (i)  ALE  (f)      (i)  ALE  (f)
STEP 7 : Selecting and evaluating safeguards
Having a feel now for the level of risk we are exposed to, we can
start to select and implement safeguards in as cost-effective a manner
as possible.
.1 Procedural and Physical Safeguards
First of all we should look at procedural and physical safeguards.
Procedural controls, especially when used in combination with physical
barriers produce the highest degree of security at the lowest
cost.
Some examples are :
- screening of employees
- off-site storage of back-up data
- development of standards
- testing of contingency plans
.2 Deterrents for Computer Crime
Secondly we should keep in mind that statistics on computer crime have
shown that the best deterrent to white collar crime is the curtailment
of incentive, i.e. to limit the profit potential. The second best deterrent
is the fear of getting caught, which is achieved through the provision
of dual authority, segregation of duties and adequate logging and
audit trails.
.3 Accident Chain Analysis
A further step for finding the appropriate control consists of finding
a way to break the chain of
- predisposing circumstances, i.e. conditions that will
facilitate the occurrence of the undesirable event (e.g. a
large store of combustibles)
- initiating actions, i.e. an event that will set off the
undesirable event (e.g. a short circuit)
- sustaining causes, i.e. conditions that keep the
undesirable event going (e.g. poor fire fighting
equipment)
This method is called accident chain analysis.
Automate for iteration
Once a set of controls is selected, based upon the above premises,
and considering that a risk analysis exercise of this kind is
easily automated on a micro through the use of electronic spreadsheet
packages, the impact and frequencies could be reassessed with the
chosen controls in place. Recalculating the individual and total annual
loss expectancies will then indicate where, and to what degree,
further controls have to be selected.
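A minimal Python sketch of such an iteration (the scores and the
assumed effect of the control are invented for illustration) :

    # Asset/threat pair -> [impact score i, frequency score f] (table 2 basis).
    risks = {
        ("data files", "accidental erasure"): [6, 5],
        ("computer room", "fire"):            [7, 2],
    }

    def total_ale(r):
        return sum(10 ** (i + f - 4) for i, f in r.values())

    print("before controls:", total_ale(risks))

    # Chosen control: daily back-ups are assumed to cut the impact of
    # erasure by two orders of magnitude; re-score and recalculate.
    risks[("data files", "accidental erasure")][0] = 4
    print("after controls :", total_ale(risks))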
Effectiveness of controls
Evaluating the effectiveness of these further controls is not very
simple and is once again a subjective process. One formula that has been
proposed is the following :

    Total Cost = ALE x (1 - E) + M

where M is the cost of the control and E an effectiveness rate between
0 (not effective) and 1 (totally effective). An example : the unavailability
- first thing in the morning - of a reconciliation report of the
S.W.I.F.T. traffic of the previous day, has been estimated to yield an
annual loss expectancy (ALE) of 400,000 due to lost opportunities and
interest losses. The countermeasure would consist of a reconciliation
package and an extra evening shift to run it, at an annual cost of
100,000 including operational costs and amortization of system and
software. If we now estimate that 8 out of 10 times the produced report
will be accurate, i.e. an effectiveness factor of 0.8, the total cost
or measure balance would be

    TOTAL COST = 400,000 x (1 - 0.8) + 100,000 = 180,000

meaning we have reduced the ALE by 320,000 at a cost of 100,000.
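Written out as a small Python sketch, with the figures of the
example :

    def total_cost(ale, effectiveness, measure_cost):
        # Measure balance: Total Cost = ALE x (1 - E) + M.
        return ale * (1 - effectiveness) + measure_cost

    ale = 400000    # annual loss expectancy without the control
    e = 0.8         # 8 out of 10 reports accurate
    m = 100000      # annual cost of package plus evening shift

    print(total_cost(ale, e, m))                        # -> 180000.0
    print("ALE reduction:", ale * e, "at a cost of", m)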
Work factor
In case of protection measures against intentional threats
the effectiveness rate will depend on the work factor (wf).
The work factor is all that the intruder requires to perpetrate
the fraud or crime such as knowledge, resources and time.
IBM's proposed expression for the degree of effectiveness is

    E = (wf - 1) / wf
where the work factor (wf) has a value of 1 for a given threat if
no countermeasures are implemented, i.e. the effort necessary to
carry into execution the threat in the absence of controls is 1.
It is then not too difficult to estimate the effort necessary to
overcome a chosen measure and express it in relation to the effort
when no measures are implemented.
When we put this relation between work factor and degree of
effectiveness in a graph :
[Graph: degree of effectiveness E = (WF - 1)/WF plotted against
work factor WF, rising steeply at first and flattening out
towards 1.0]
we can see that no measures means 0 % effectiveness and that
100 % effectiveness is unattainable because the work factor, or
the effort for the intruder, would have to be infinitely great. The graph
also illustrates a typical issue in security (because security can
be seen as the degree of effectiveness of all controls), and that
issue is that a small and well-chosen set of measures will
quickly give a fairly high degree of security, but that increasing
security from there onwards is relatively expensive.
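A few lines of Python make the diminishing returns visible :

    def effectiveness(wf):
        # IBM's expression: wf = 1 means no countermeasures, E = 0;
        # E approaches but never reaches 1 as the work factor grows.
        return (wf - 1) / wf

    for wf in (1, 2, 4, 10, 100):
        print("work factor %3d: E = %.2f" % (wf, effectiveness(wf)))
    # 1: 0.00, 2: 0.50, 4: 0.75, 10: 0.90, 100: 0.99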
Choosing between alternatives
A first approach for choosing between alternative measures is to choose
the measure with the best balance according to the formula : Total cost =
annual loss expectancy x (1 - effectiveness factor) + measure cost.
However, a measure rarely works against one threat only, and vice versa
one threat may need more than one measure.
A method to select the best alternative would consist of the following
steps :
(1) build a matrix with related threats in descending
order of ALE versus the alternative measures.
(2) assign the cost of the measure to the threat it has
specifically been designed against and only consider the cost
necessary to make the measure work against that threat.
(3) evaluate the reduction in annual loss expectancy it causes
against all the threats considering the marginal cost
needed to make the measure effective against those threats
it was not designed for.
(4) select most cost beneficial measure.
Of course, the basis for selection can be different. If it were the largest
saving, as in our example, alternative C would be chosen. If the basis is
return on investment, alternative E would be the choice.
   MEASURE    THREAT 1       THREAT 2      THREAT 3
              ALE=20000      ALE=6000      ALE=4000      SAVINGS   INVESTMENT

   A          R=10000        R=2000        R=4000          7000       9000
              C= 8000        C=   0        C=1000
   B          R=20000        R=2000        R=3000         12000      13000
              C= 8000        C=3000        C=2000
   C          R=20000        R=   0        R=3000         13000      10000
              C= 8000        C=   0        C=2000
   D          R= 8000        R=2000        R=3000          5000       8000
              C= 8000        C=   0        C=   0
   E          R= 6000        R=2000        R=4000          8000       4000
              C=   0         C=3000        C=1000

Example of alternative selection matrix
(R = ALE reduction, C = cost of measure)
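A minimal Python sketch of steps (1) to (4), using the figures of
the example matrix :

    # (R = ALE reduction, C = marginal cost) per threat, for each measure.
    measures = {
        "A": [(10000, 8000), (2000, 0),    (4000, 1000)],
        "B": [(20000, 8000), (2000, 3000), (3000, 2000)],
        "C": [(20000, 8000), (0, 0),       (3000, 2000)],
        "D": [(8000, 8000),  (2000, 0),    (3000, 0)],
        "E": [(6000, 0),     (2000, 3000), (4000, 1000)],
    }

    for name, cells in measures.items():
        reduction = sum(r for r, c in cells)    # total ALE reduction
        investment = sum(c for r, c in cells)   # total cost of the measure
        savings = reduction - investment
        print(name, "savings:", savings, "investment:", investment,
              "return on investment:", round(savings / investment, 2))

    # Largest saving -> C (13000); best return on investment -> E (2.0).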
CONCLUSION
In conclusion I would like to state that despite the need for a reasonable
amount of data, despite the need for a sound mathematical basis and adequate
management support, despite the fact that input may be subject to conservative
thinking, and despite the fact that the outcome can be a source of
misinterpretation, risk analysis is, and will be, a thought-provoking process.
For those of you who consider it as an approach to evaluate systems,
I hope to have given an appropriate overview of approaches to take,
issues to consider and pitfalls to avoid. I hope you know now where to
start, but I cannot guarantee where you will end up, because risk
analysis is not a one-time job; it should be performed continuously in
order to cope with an ever changing environment.
BIBLIOGRAPHY
IBM, Data Security Design Handbook, GBOF-7502.
IBM SVENSKA AB, An Executive Guide to Data Security, G320-5647.
Y. CROMBE and L. WARTON, La Sécurité des systèmes informatiques,
Institut d'Administration et de Gestion, Université Catholique de
Louvain, April 1982.
COURTNEY R.H., Security Risk Assessment in Electronic Data
Processing, IBM Ref. TR 21.700, 1978.
CARROLL J.M., Risk Management for Computer Security Managers, Report
no. 84, Dept. of Computer Science, University of Western Ontario,
1981.
LOWRANCE W.W., Of Acceptable Risk, Ed. W. Kaufmann, Inc., 1976.
NBS, Technology Assessment : Methods for Measuring the Level of
Computer Security, produced by System Development Corporation,
TM-WD-8051/011/2, Sept 1981.
NBS, Guidelines for Automatic Data Processing Physical Security and
Risk Management, FIPS publ. no. 31, 1974.
NBS, Guidelines for Automatic Data Processing Risk Analysis, FIPS
publ. no. 65, 1979.
BOUND W.A.J. and RUTH R.R., Risk Management - How can it become a
useful tool ?, Proceedings of IFIP's 1st Security Conference, Stockholm,
May 1983.
KWONG J.F., Approaches to Justifying EDP Controls and Auditability
Provisions, COM-SAC Vol. 7 no. 2, Vol. 8 no. 1 and Vol. 8 no. 2,
Jul 1980, Jan 1981, Jul 1981.
Paper to be presented at the International symposium on auditing
in an advanced complex computerized environment.
Marriott Hotel, Amsterdam, 10-12 September, 1984.
Security and audit of operating systems (including manufacturers'
utilities).
Herman Roos, partner of Klynveld Main Goerdeler, Amsterdam.
Abstract: After defining the keywords advanced, complex,
operating-system and computer-system-security, the possible
scope of an operating system audit is considered. The functions
and related interfaces of operating systems are concisely
summarized. A digression is made on access control systems and
on protection systems. An approach to the audit of operating
systems is proposed; the nature of the considerations in
determining the scope in the context of a financial audit are
contemplated and the consequences for the required edp skills are
reviewed.
I-Introduction.
The symposium title contains the notions advanced and complex in
relation to a computerized environment. Both are entitled to
careful interpretation in relation to the object of this paper:
operating systems. Next the terms operating-system and operating-system-
security will be defined and the objectives of an
operating-system audit will be discussed.
1-Advanced computer-techniques.
Advanced can mean that the computer technique applied can be
considered of such a novel nature that at this point in time only
a relatively small proportion of the audit professionals and even
of the computer professionals are familiar with those techniques.
For instance the support provided by the operating system for the
management of a geographically distributed database.
2-Advanced computerized internal controls.
A second meaning of "advanced" can be the computerization of
procedures that are in the large majority of actual systems
manual procedures. This is of particular interest to auditors if
those procedures contain internal controls. For instance the
physical allocation of database files by means of indirection,
such that direct addressing through the physical address is not
possible because it is hidden by the system and thus cannot be
known by any programmer or operator.
3-Complexity.
It is by no means easy to give a precise meaning to the notion of
complexity. Complex can be defined in the context of a
computerized environment as the number of all possible
interactions or relationships between the individual components
of the system. Possibly some of the interactions and
relationships are valid and others may be invalid. Alternatively,
complex may be thought of as the number of all possible states of
the system. The larger the number of possible states, the more
complex the system.
What can be considered as a component depends on the level of
abstraction that is used. At the system level it might be a
terminal operator, the terminal he is operating on, the procedure
(or transaction) he is executing and the database as seen by the
procedure or transaction that is used by that procedure. At the
procedure level it might be a command string or an individual
program or a combination of both.
Complexity is introduced by the existence of different component-types
and by the existence of large quantities of each component-type
in a system. One of the typical interactions is the
simultaneous sharing of software and hardware components by
processes (a process is defined as the execution of a program).
So complexity might at the system level also be measured in terms
of the degree of concurrent data and program sharing.
4-Operating system.
Operating system for this paper comprises all of the software
aimed at the management and use of the hardware.
Software of this kind is normally supplied and maintained by the
computer manufacturer. Supplementary operating software may
however be obtained from independent software manufacturers.
Major components are the control program and the programming
system.
The control program consists of a set of parameterized procedures
for the management of the resources (processor, I/O devices,
programs, files,etc.) of the computer system. The procedures are
activated and the parameters set or changed through a set of
commands. Examples are IPL (Initial Program Load), JCL (Job Control
Language) and operator commands.
The programming system provides the tools for the creation of
programs which can process user-defined data. Generally the
programming system also contains tools for programming at the
machine interface level. User data may be defined within the
program or independent of the program by means of a facility
offered by data base management software.
5-Security of a computer system.
The combination of the hardware and the supporting operating
software will be called a computer system. The consideration of
the operating system and the hardware it runs on as one system is
of interest when reviewing the effectiveness of the protection
mechanism in part IV of this paper.
A computer system can be considered secure if:
(a) it controls the access to the system,
(b) it protects processes belonging to different domains against
uncontrolled interference (a domain is defined as a set of
resources which is used by a set of particular processes),
(c) it controls the access to and the sharing of the programs
and data used by those processes.
Access to the system may imply the ability to create, modify and
delete programs, data and processes.
6-Reliability of a computer system.
Security as defined in par.5 presupposes the reliable functioning
of all the parts of the system which might affect the correctness
of the information embedded in the data that are stored and
manipulated in the system.
Denning (1), referring to Melliar-Smith (2), defines a failure as
an event at which a system violates its specifications and a
fault as a mechanical or algorithmic defect which may generate an
error, an error being defined as an item of information which,
when processed by the normal algorithms of the system, will
produce a failure. Referring to Parnas (3) he states that a
system is correct as soon as it is free of faults and its
internal data contain no errors and that a system is reliable if
failures do not seriously impair its satisfactory operation.
When we relate this to practical experience we note that new
versions of an operating system tend to be less reliable than
more "stable" versions. This is not restricted to operating
systems but applies to any set of programs of reasonable
complexity.
7-The audit of operating systems.
The possible objectives of the audit of an operating system are
the determination of the effectiveness of the system with respect
to security and reliability and the determination of the efficiency.
A system is effective if it functions in accordance with its
specifications. It is efficient if it does so with minimal use of
resources.
Within the context of a financial audit the scope will mostly be
limited to the security aspects.
The reliability of an operating system is a prerequisite for its
security. The audit of the reliability of an operating system is
however a rather problematic matter in the computer science
community, let alone in a commercial environment. It is not
putting it too strongly to say that in actual practice the majority of
at least the larger commercially available systems are known to
contain faults which may produce failures and by inference cannot
be correct. Most of those systems appear however to function
reasonably well. This might mean that despite the occurrence of
failures most systems can be considered reasonably reliable in
the sense of Parnas.
Despite the lack of a precise meaning of reasonable reliability,
the security audit of an operating system within the context of a
financial audit may be an unavoidable part of the audit. This
depends on the professional judgement of the auditor in charge,
subject to the advancedness of the administrative system in the
sense of par.1/2. It is anyhow an optional part of an audit in a
computerized environment.
II Operating system functions and interfaces.
8-Functions and interfaces.
In general an operating system function supplies some service to
another operating system function or to an application system
function. Any system function is actually performed on request by
another function, be it a hardware function, a programmed
function or a human function.
The actual request for service is done through a communication
channel over which the request is passed by using a protocol and
a language which are observed and understood by both functions.
Language, protocol and communication channel together are the
interface.
Examples of communication channels are a keyboard, a network
service, an I/O controller and the system bus.
Examples of protocols are the conventions to be observed when
communicating operator commands through a keyboard, the
conventions for passing the parsed command to a program that
performs the requested function, and the standard Cobol conventions
for passing parameters to a subprogram and receiving back the
result of the subprogram.
Examples of interface languages are the operator commands, the
editor commands, the programming language, and the menus of an
online application.
By means of the interface language, input is provided and the
specific (sub)function is requested from a set of allowed
functions. From a functional point of view the language part of
the interface is the most essential one because it determines the
actual operations on the data in the system.
9- The relevance of the interfaces.
The relevance of the distinction between function and interface
for the subject of this paper is the difference in failure
probability.
A function was defined as "something" which provides "something
other" with a service. A function is performed by a set of
processes, that is driven by a program. A part of the program
defines the external interface in the form of a data structure
that is shared with the program which drives the requesting
process and an algorithm to manipulate and interprete the actual
data values of that data structure.
If the programs are relatively small a complex system will
require a large number of them in order to provide al 1 the
services which are expected from the system. Small programs tend
be little complex, which means that they are easy to test and to
debug.
Fitting them together in one system offers however the problem of
misunderstanding between modules. Everybody who has been involved
in integration tests of complex systems knows the problems of
detecting, determining the scope of and resolving interface
faults.
EDP auditors who have been involved in systems programming or in
operating systems audit probably know that an operating system is
a heavily modularized program with a very complicated set of data
structures (control blocks) which are shared among many different
modules.
Probably the greatest risk of a failure which produces a material
error is caused by interface faults. The communication channel is
merely a carrier of information, so most of the problems must be
encountered in the language part and the protocol part of the
interface.
The relative importance of interfaces with respect to system
failures was recently confirmed by a study of Basili and Perricone
(4). About 60% of all errors could be attributed to interface
errors and the rest to control and computation errors. They also
report that approx. 48% of the errors during the entire life
cycle were attributable to incorrect or misinterpreted functional
specifications or requirements.
The conclusion must be that both the human interface and the
program interface are a major source of errors. The definition of
error applied by Basili and Perricone is equivalent to the
definition of fault in par.6.
The previous definitions and examples are supposed to convey
sufficiently clearly the notions of function and interface. In the
following description of the most essential features of the
working of a computer system, the several parts will not
explicitly be identified as function or interface.
This description will serve as a frame of reference for the
treatment of the access control system and the protection system
in parts III and IV of this paper.
10- Jobs and transactions.
A meaningful quantity of work carried out by a computer system is
called a "job" or a "transaction". Work can be the processing of
business data or the construction of a program.
The term "job" historically refers to the repetitive execution of
one or more related programs, where each subsequent execution
takes as input the next set of data from a stack of input data
which are all of the same type or a very limited set of types and
delivers a limited set of output to an output stack. When
the input stack is empty and the output is delivered onto the
output stack the job normally is completed and terminates.
The term "transaction" denotes the single execution of one or
more related programs which take input from a terminal device
during the execution of a set of (transaction) programs. A
transaction is activated from a terminal. This requires the
execution of a main program which responds to requests from active
terminals. When a transaction is terminated the main program
remains active. When that main program is terminated no
transactions can be requested from the terminals. The main
program is usually started and stopped through commands from a
special terminal called the operator's terminal.
11- Processes and resources.
The execution of a job or a transaction requires the subsequent
invocation (creation, execution and destruction) of several
processes by a main process. Each process requires a number of
resources for its execution.
12- Management of jobs, transactions, processes and resources.
The function of an operating system is the management of jobs and
transactions and the control of the use that the concurrently
executing processes make of the resources that are
available on the computer system, in order to provide the desired
service to the users of the system.
The major components of an operating system are the control
program and the programming system.
13- Control program.
The control program consists of a set of parameterized programs
for the management of the resources of the computer system.
A program consists of algorithms and data structures (see
par.31).
Resources can be distinguished in hardware resources and software
resources. Hardware resources are the processors, the internal
memory, the input/output devices and the external memory.
Software resources are programs and datafiles.
Programs define operations on either user data or system data.
User data contain information about organizational events and
situations.
The system data structures contain the information about the
characteristics and the actual status of the resources. The
procedures are activated and the parameters set or changed
through a set of commands. Commands can be issued by a procedure
pertaining to a control program, by a user program or from an
I/O device. In a number of systems this command set can be
divided into a number of subsets which may differ substantially
in syntax. For instance:
-a set for the generation and the modification of the control
program;
-a set for the initial activation of the control program after
the computer has been powered on (initial program loading or
IPL);
-a set for the management of program libraries and for the
creation of programs and data files;
-a set for the management of sets of processes (job control
language or JCL, including operator commands);
-a set for the use of service programs of several kinds
(utilities and special purpose programs like sorts, performance
monitors, copiers, etc.).
Each of those sets requires a particular syntactical form.
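As a rough sketch (illustrative Python again, with invented names),
a control program can be pictured as parameterized procedures
operating on system data structures that record resource status,
activated through commands:

    # System data structure: characteristics and actual status per resource.
    resources = {"PRINTER1": {"type": "printer", "status": "offline"}}

    # A parameterized control program procedure, activated by a command.
    def vary_online(resource_name):
        resources[resource_name]["status"] = "online"

    # A tiny command interpreter; commands may come from a control program
    # procedure, a user program or an I/O device.
    commands = {"VARY ONLINE": vary_online}

    def issue(command, argument):
        commands[command](argument)

    issue("VARY ONLINE", "PRINTER1")   # the resource status is updated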
14- I/O management.
Generally the quantity of program code and data which is
available on a computer system is much larger than can be
simultaneously stored in the internal memory. Programs and data
which are not required for the execution of a process are kept on
external memory. So it is clear that an important part of the
management of the resources involves the transport of information
between internal and external memory and between the input/output
devices and internal memory.
The transportation of information on most computer systems takes
place simultaneously with the execution of a process. This
inherent parallelism is one of the motives to divide the
operating system into several functional parts which can operate
simultaneously. The part that manages the transportation is
generally called the I/O manager.
In most administrative computer applications the quantity of data
which is available on the system is very large compared with the
amount of code of application programs. In order to be able to
find and move relatively small parts of data from and to external
memory a specialized file system takes care of the necessary
housekeeping. As it responds to the I/O manager it will not be
dealt with as a separate part. For the purpose of this paper its
function is considered to be included in the I/O manager.
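The division of labour between the file system and the I/O manager
can be sketched as follows (illustrative only; the record
addressing scheme is an assumption of the sketch):

    # The file system knows where a small part of the data lives; the I/O
    # manager performs the actual transport between external and internal
    # memory.
    external_memory = {("CUSTOMERS", 7): b"record 7 bytes"}
    internal_memory = {}

    def io_read(block_address):                     # I/O manager: transport
        internal_memory[block_address] = external_memory[block_address]
        return internal_memory[block_address]

    def read_record(file_name, record_number):     # file system: housekeeping
        return io_read((file_name, record_number)) # locate, then transport

    record = read_record("CUSTOMERS", 7)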
15- Resource management.
Processes must be able to acquire the resources for the duration
needed to perform their functions. Resources comprise programs
and data, different levels of storage, and devices that permit
communication with the outside world like workstations, printers
and communication lines. Each resource type can only be used in
a way that fits its technical characteristics. In order to avoid
the necessity to explicitly take this into account while making
an application program, a major function of the resource
management is to hide those technical details from the application
programmer. This is an important reason for the subdivision of
the resource manager into different managers for each resource
type. The resource manager must be sure that a process that
requests a resource has the appropriate authority.
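A minimal sketch of one such per-type manager, hiding the technical
details of the device and checking authority before granting it
(the authority table and the device class are hypothetical):

    class Printer:                       # technical details hidden in here
        def __init__(self, name):
            self.name = name

    class PrinterManager:
        def __init__(self, authority):   # authority: (process, printer) -> bool
            self.authority = authority

        def acquire(self, process_id, printer_id):
            if not self.authority.get((process_id, printer_id), False):
                raise PermissionError("process lacks authority for resource")
            return Printer(printer_id)   # granted for the duration needed

    manager = PrinterManager({("P1", "PRINTER1"): True})
    printer = manager.acquire("P1", "PRINTER1")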
16- A hierarchy of memories.
Programs and data are stored in the computer. The various parts
in a computer where the program code and the data can reside can
be thought of as a hierarchy of memory devices with different
capacity and access speed. This encompasses (in order of
diminishing speed and increasing capacity) registers, cache memory
and "slow" internal memory. A process can only execute if its
procedures and data are in the internal memory. Programs and data,
when not directly needed by a process, are swapped out of internal
memory onto directly accessible external storage media like
magnetic discs. This is the fourth memory category in the
hierarchy in speed and capacity. Memory devices like magnetic
tape and magnetic diskettes are primarily considered as I/O
devices for the off-line storage of programs and data and as
media for the exchange of data between different computer
systems.
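The swapping behaviour can be sketched as follows (a deliberately
naive choice of victim; the whole sketch is illustrative):

    # Ensure a program or data part is in internal memory; when memory is
    # full, swap out a part not directly needed onto magnetic disc (the
    # fourth category in the hierarchy).
    def ensure_in_memory(part, internal_memory, disc, capacity):
        if part in internal_memory:
            return
        if len(internal_memory) >= capacity:
            victim = next(iter(internal_memory - {part}))
            internal_memory.remove(victim)
            disc.add(victim)                 # swapped to external storage
        disc.discard(part)
        internal_memory.add(part)            # the process can now execute

    memory, disc = {"A", "B"}, {"C"}
    ensure_in_memory("C", memory, disc, capacity=2)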
17- Control program commands.
Frequently used command sets are IPL, JCL and operator commands,
and Query and Edit commands.
Through IPL an initial program is started and the data structures
of the control program are given the right values that correspond
to the actual resource configuration and the corresponding names
that will be applied in the commands.
Changes in the configuration generally require the generation of
a new version of the control program or a part of it through a
particular sysgen command set (for each control program layer a
specialized command set is required).
The JCL serves to create jobs, to set priorities among jobs, to
set conditions for the execution of parts of jobs (job-steps), to
indicate which programs must be executed for each job-step, and
to assign files to programs. With this information the control
program creates processes, controls their execution and
terminates them.
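The kind of information the JCL conveys can be sketched as a data
structure (the field names are invented for the illustration; this
is not actual JCL syntax):

    # A job: prioritised job-steps, a condition per step, the program to be
    # executed, and the files assigned to the program. From this the control
    # program creates processes, controls their execution and terminates them.
    job = {
        "name": "PAYROLL",
        "priority": 5,
        "steps": [
            {"program": "PAYCALC",  "condition": None,
             "files": {"INPUT": "PAY.MASTER", "OUTPUT": "PAY.WORK"}},
            {"program": "PAYPRINT", "condition": "prior step ended normally",
             "files": {"INPUT": "PAY.WORK", "REPORT": "PAY.LISTING"}},
        ],
    }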
The operator commands provide the possibility to stop started
jobs, change priorities, etc. during the execution of jobs which
have previously been defined through the JCL, and in some systems
allow for the change of the authority of a particular job or
transaction. (For instance the start of a so-called privity job
under the GCOS3 system or the use of particular IMS/VS and VTAM
commands.)
Query commands permit the manipulation of formatted data.
Edit commands permit the manipulation of non-formatted (text)
data.
18- Programming system.
The programming system creates and destroys software resources. It
provides the tools for the creation of programs which can process
user-defined data and for the creation of data files.
Examples are compilers, tracers and debuggers for high-level
programming languages like COBOL and PL/I, but also data
dictionaries, library managers, test data generators, batch
terminal simulators, etc.
19- Linkers and loaders.
Normally the program that has been translated by a compiler is
not immediately executable but must at least be bound to actual
storage locations and to procedures and data that are external to
the program. This can be done at several points in time between
compilation and the execution of a particular program statement
(Myers,G.J.) (5). One way is to pass the compiled program through
a linkage editor which links it together with previously compiled
programs and in some cases to the required control program
procedures for I/O etc. The linked program contains the
information that is necessary to substitute the addresses of
entry points to the program and of external reference points
within the program by the correct relative addresses.
The process of determination of the correct addresses is called
relocation. This is normally done by the program loader. In some
systems the linking and the loading are done by a linking loader
which combines both functions. In one system those functions are
performed below the machine interface, invisible and untouchable
by the system user (IBM System/38) (6,7,8).
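Relocation can be sketched as follows (a strongly simplified
object-module layout, assumed purely for the illustration):

    # The linked program records which address words are relocatable; the
    # loader adds the actual load address to each of them, turning relative
    # addresses into correct absolute addresses.
    def relocate(code, relocatable_offsets, load_address):
        code = list(code)
        for offset in relocatable_offsets:   # entry points, external references
            code[offset] += load_address
        return code

    # A program linked at relative address 0 and loaded at address 1000:
    loaded = relocate([0, 12, 40], relocatable_offsets=[1, 2],
                      load_address=1000)     # -> [0, 1012, 1040]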
20- Machine interface programming.
Generally the programming system also contains tools for
programming at the machine interface level.
Machine interface is defined as the instruction set together with
the predefined use of specific parts of memory and of certain
registers. This can be considered equivalent to the term
machine architecture.
Programming at the machine interface level makes it possible to
modify compiled programs, linked programs and even loaded
programs with resolved relative addresses that are ready for
execution or already executing.
21- Data definition.
User data may be defined within the program or independent of the
program by means of a facility offered by data base management
software.
The operating system supplied by the computer manufacturer may
provide extensive database and data communications facilities.
Alternatively those facilities may be provided by independent
manufacturers' software as an extension to the operating system
provided by the computer manufacturer.
III Security and the access control system.
22- Security policy, security management and security mechanism.
A distinction must be made between security policy and security
management (Wilkes) (9).
A security policy consists of a set of statements which express
the objectives of security. It contains intentions and
guidelines.
Security management is the execution of the security policy.
In practice this includes the definition of domains and the
authorities of specific persons to those domains in a set of
security rules.
The implementation of those rules requires a security mechanism
which accepts the rules and grants or denies access to the defined
domains according to those rules. This can be considered the
computerized part of the security management.
23- Domain.
A domain is defined as a set of resources that is used by a set
of particular processes. On the job-level or the transaction
level it may be thought of as a series of programs and related
files, the execution of which requires the same level of
authorization or the same level of security clearance. The
execution of the set of programs pertaining to such a domain is
done through one basic process that invokes other processes, some
of which execute programs of the defined domain and others
provide particular services like the execution of I/O.
One of the possible domains consists of a set of processes that
execute the programs which operate on the domain definitions.
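A domain can be pictured as follows (all names invented for the
illustration):

    # A domain: a set of resources used by a set of processes, all requiring
    # the same level of authorization or security clearance.
    payroll_domain = {
        "clearance": "confidential",
        "programs": {"PAYCALC", "PAYPRINT"},
        "files": {"PAY.MASTER", "PAY.WORK", "PAY.LISTING"},
    }
    # One special domain holds the programs that operate on the domain
    # definitions themselves.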
24- The security mechanism.
The security mechanism consists of three major parts:
-An access control system which is a computerized set of
procedures which relates (e.g. in a set of authorization tables)
the identifiers of system users to the identifiers of system
resources and maintains the kind of the relationship (read,
write, execute).
-A password procedure which enables the system to check the
identity of a supposed system user.
-A protection system which dynamically enforces the use of system
resources in conformity with the authorization tables.
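Taken together, the three parts can be sketched as follows
(passwords are kept in clear text purely for brevity of the
illustration; every name is an invention of the sketch):

    # Access control system: authorization tables relating user identifiers
    # to resource identifiers and the kind of the relationship.
    AUTHORIZATIONS = {("ALICE", "PAY.MASTER"): {"read"},
                      ("BOB",   "PAY.MASTER"): {"read", "write"}}
    PASSWORDS = {"ALICE": "secret1", "BOB": "secret2"}

    def identify(user, password):                  # password procedure
        return PASSWORDS.get(user) == password

    def access(user, password, resource, kind):    # protection system
        if not identify(user, password):
            raise PermissionError("identity check failed")
        if kind not in AUTHORIZATIONS.get((user, resource), set()):
            raise PermissionError("use not in conformity with the tables")
        return "granted"

    access("BOB", "secret2", "PAY.MASTER", "write")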
25- Access control system: the layered approach.
Historically the mechanism for the management and control of
access to the computer system resources was not designed as a part
of the operating system.
The desirability of an access control mechanism was only felt
when it became feasible to design systems which provide remote on-line
use of data files. Those systems required a transaction
monitor system that was not included in the until then prevailing
batch oriented operating software. So-called TP-monitors were
added for that purpose. This principle of extending the basic
operating system functions with additional software layers is
still predominantly applied and is perhaps the main raison d'être
for independent operating software suppliers.
The consequence of this approach is that in actual practice in
most cases the access control mechanism and the tools to manage
this mechanism are scattered over several pieces of operating
software. (E.g. the IB