
7. Data integrity and security

7.01 What could go wrong?

Sources of data errors include:

  • Bad data – automated integrity checking helps to prevent this.
  • Poor application logic – this can be mitigated with normalisation.
  • Failed database operations – usually the biggest problem; easy to handle for atomic operations but much harder otherwise. Database snapshots and transactional databases can help by saving a state that the database can be rolled back to if a problem occurs later (see the sketch after this list).
  • Malicious user activity – mitigated by control of user privileges.
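As a minimal sketch of the transactional approach, the statements below wrap two dependent updates in a single transaction so a failure part-way through can be undone. The table and column names (accounts, balance) are hypothetical, and the syntax follows standard SQL as supported by most engines:

    BEGIN TRANSACTION;

    -- The two updates must succeed or fail together: moving money
    -- between two hypothetical accounts is not atomic as two statements.
    UPDATE accounts SET balance = balance - 100 WHERE id = 1;
    UPDATE accounts SET balance = balance + 100 WHERE id = 2;

    -- If either statement failed, undo everything:
    --   ROLLBACK;
    -- Otherwise make both changes permanent:
    COMMIT;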

7.02 How do we reduce risk of error?

  • We can specify a PRIMARY KEY or a UNIQUE KEY, ensuring that every row is identifiable and unique. This avoids the errors and inconsistencies that arise when the same data occurs in multiple rows.
  • We can specify a FOREIGN KEY, which ensures that a reference between tables is maintained: a change that would leave a reference dangling across joined tables is rejected (or cascaded).
  • We can specify simple validation constraints, so that if the data entered doesn't match the pattern we expect it to obey, it is rejected at the point of entry. A sketch combining all three follows this list.
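To make this concrete, here is a minimal sketch in standard SQL. The tables and columns (customers, orders, email, quantity) are hypothetical; the point is where each kind of constraint sits:

    CREATE TABLE customers (
      id    INTEGER PRIMARY KEY,           -- every row identifiable and unique
      email VARCHAR(255) NOT NULL UNIQUE   -- no duplicate rows for one address
    );

    CREATE TABLE orders (
      id          INTEGER PRIMARY KEY,
      customer_id INTEGER NOT NULL,
      quantity    INTEGER NOT NULL,
      CHECK (quantity > 0),                -- simple validation constraint
      FOREIGN KEY (customer_id) REFERENCES customers(id)  -- must exist
    );

    -- Both of these would be rejected at the point of entry:
    -- INSERT INTO orders VALUES (1, 999, 5);  -- no customer 999 (FOREIGN KEY)
    -- INSERT INTO orders VALUES (2, 1, -3);   -- negative quantity (CHECK)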

These checks don't verify the truth of data – only its validity. A well-formed but wrong email address would still be accepted.

Data normalisation is another way to prevent errors from occurring: by storing each fact in only one place, it removes the redundancy that lets copies of the same data drift out of sync.
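As a small sketch of the idea (again with hypothetical table and column names), a denormalised table that repeats a tutor's details on every student row can be split so those details are stored exactly once:

    -- Denormalised: the tutor's email repeats on every student row, so
    -- updating it in one row but not another creates an inconsistency.
    -- students(student_id, name, tutor_name, tutor_email)

    -- Normalised: each tutor's details are stored exactly once.
    CREATE TABLE tutors (
      tutor_id INTEGER PRIMARY KEY,
      name     VARCHAR(100) NOT NULL,
      email    VARCHAR(255) NOT NULL
    );

    CREATE TABLE students (
      student_id INTEGER PRIMARY KEY,
      name       VARCHAR(100) NOT NULL,
      tutor_id   INTEGER NOT NULL,
      FOREIGN KEY (tutor_id) REFERENCES tutors(tutor_id)
    );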


