This getting started guide provides instructions for using Code-Point with Polygons in different software applications. It is written so that users with limited technical knowledge can follow it.
Code-Point with Polygons is a dataset that contains the notional area of postcode units, allowing customers to display and analyse any data collected at the postcode level.
The polygons within the product are derived from georeferenced Royal Mail Postal Address File (PAF) delivery addresses. A Thiessen (also known as Voronoi) process is applied to the individual address records within each postcode: a mathematical computation that creates polygons from point data. In this way, mathematically consistent boundaries are created between distinct postcode groups, producing this notional boundary set.
A postcode unit is, by definition, only the delivery point or collection of delivery points that constitute it. Its boundary is therefore a notional one, the position of which is arbitrary. What has been created, however, is a set of boundaries that follows a consistent logic and portrays the notional footprint of each postcode unit. Each boundary encloses every delivery address for which positional data of sufficient quality is available, and follows major physical features that could reasonably be regarded as part of the postcode boundary.
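The Thiessen principle can be illustrated with a small sketch: a location lies inside a postcode's notional polygon exactly when that postcode's address point is its nearest seed. The following Python fragment (purely illustrative; the postcodes and coordinates are invented and are not taken from the product) assigns locations to hypothetical address points by nearest-neighbour distance:

```python
import math

# Hypothetical address points (invented coordinates), one per postcode.
seeds = {
    "AA1 1AA": (2.0, 2.0),
    "AA1 1AB": (8.0, 3.0),
    "AA1 1AC": (5.0, 8.0),
}

def nearest_postcode(x: float, y: float) -> str:
    """Return the postcode whose seed point is closest to (x, y).

    A location lies inside a postcode's Thiessen polygon exactly
    when that postcode's address point is its nearest seed.
    """
    return min(seeds, key=lambda pc: math.dist((x, y), seeds[pc]))

# Sampling the plane on a coarse grid shows the notional regions emerge.
for y in range(10, -1, -2):
    print("".join(nearest_postcode(x, y)[-2:].ljust(4) for x in range(0, 11, 2)))
```

The real product computes the polygon geometries themselves (and snaps them to major physical features where appropriate), but the region-membership rule shown here is what makes the resulting boundaries mathematically consistent.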
This getting started guide focuses on using the product in Shapefile, TAB and MID / MIF formats. For guidance on using the product in vector tiles or GeoPackage formats please see the following getting started guides:
Ordnance Survey measures the data in its products in one or more of the ways set out in the definitions of data measures table below.
| Data measure | Definition | Sub-measure | Definition |
|---|---|---|---|
| Completeness | Presence and absence of features against the specified data content* | Omission | Features representing objects that conform to the specified data content but are not present in the data. |
| | | Commission | Features representing objects that do not conform to the specified data content but are present in the data. |
| Logical consistency | Degree of adherence to logical rules of data structure, attribution and relationships | Conceptual consistency | How closely the data follows the conceptual rules (or model). |
| | | Domain consistency | How closely the data values in the dataset match the range of values in the dataset specification. |
| | | Format consistency | The physical structure (syntax): how closely the data stored and delivered fits the database schema and agreed supply formats. |
| | | Topological consistency | The explicit topological references between features (connectivity), according to specification. |
| Positional accuracy | Accuracy of the position of features | Absolute accuracy | How closely the coordinates of a point in the dataset agree with the coordinates of the same point on the ground (in British National Grid EPSG:27700 for Shapefile, TAB, MID/MIF and GeoPackage formats, and Web Mercator projection EPSG:3857 for the vector tiles format). |
| | | Relative accuracy | Positional consistency of a data point or feature in relation to other local data points or features within the same or another reference dataset. |
| | | Geometric fidelity | The ‘trueness’ of features to the shapes and alignments of the objects they represent*. |
| Temporal accuracy | Accuracy of temporal attributes and temporal relationships of features | Temporal consistency | How well ordered events are recorded in the dataset (life cycles). |
| | | Temporal validity (currency) | Validity of data with respect to time: the amount of real-world change that has been incorporated in the dataset that is scheduled for capture under current specifications. |
| Thematic accuracy (attribute accuracy) | Classification of features and their attributes | Classification correctness | How accurately the attributes within the dataset record the information about objects*. |

\* When testing the data according to the dataset specification against the ‘real world’ or reference dataset.
The current specification represents each postcode in a set format comprising an outward code and an inward code. Code-Point and Code-Point with Polygons postcodes have 0, 1 or 2 spaces between the outward and inward codes.
The table below identifies how postcodes are currently shown in the data.
| Postcode structure | Number of spaces |
|---|---|
| AANNNAA | 0 spaces (represented as AANNNAA), for example: PO143RW |
| ANN NAA | 1 space (represented as ANN<>NAA), for example: PO14 3RW |
| AN  NAA | 2 spaces (represented as AN<><>NAA), for example: B1  5AP |

Code-Point and Code-Point with Polygons postcodes are currently represented as above; however, there may be a user requirement to represent each postcode in a uniform, single-space format.

The aim of this section is to offer guidance on how to process Code-Point and the related Code-Point with Polygons data to generate postcodes with a single space.

The single-space instructions apply to both the postcode point and unit polygon products. Guidance covers Microsoft Excel, Microsoft Access, MapInfo, Esri and QGIS software when using comma-separated values (CSV) and other formats.

The underlying method is the same in every case: all existing spaces are removed, and a single space is then inserted before the third character from the right.

The NTF format is not included in this chapter as it is not compatible with a single-space format.

These instructions apply to postcode units only, and not to vertical streets (which are found only in the Code-Point with Polygons dataset). To ensure that vertical street references are not corrupted, remove them from your data before applying these instructions.
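The remove-all-spaces-then-insert-one rule can be sketched in Python (a minimal illustration; the function name is our own, not part of any product tooling):

```python
def to_single_space(postcode: str) -> str:
    """Normalise a postcode to the uniform single-space format.

    Removes any existing spaces, then inserts one space before the
    third character from the right (i.e. before the inward code).
    """
    compact = postcode.replace(" ", "")
    return compact[:-3] + " " + compact[-3:]

# Examples matching the table above:
# to_single_space("PO143RW")  -> "PO14 3RW"   (0 spaces)
# to_single_space("PO14 3RW") -> "PO14 3RW"   (1 space, unchanged)
# to_single_space("B1  5AP")  -> "B1 5AP"     (2 spaces)
```

As noted above, this rule assumes ordinary postcode units; vertical street references should be removed from the data before any such processing is applied.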