
New challenges for analysis?

Thomas Dessein, C-FER, Canada, discusses probabilistic defect management.

Pipeline operators have never been in a better position to ensure the safe operation of their assets. Database systems are now commonly used to track all data pertaining to the pipeline operating history, the right-of-way (ROW) conditions and the current state of damage to the pipelines, and this information is readily available to the employees responsible for maintaining the integrity of the pipeline network. With these systems, operators can now collate, align and track the degradation of known defects on the network. The use of database systems, combined with the improving accuracy of inline inspection (ILI) tools and the ability to combine multiple ILI sensors in a single tool run to identify and size cracks, corrosion, dents and complex features, means that operators have immediate access to all the data necessary for pipeline integrity management.

However, pipeline defect data is not only more available, but also more abundant than ever before. As the detection and reporting thresholds of ILI tools improve, the number of reported features increases and the overwhelming amount of resulting data can pose new challenges for analysis.

Traditional approaches to assessing defects, particularly on problem lines with a high prevalence of manufacturing anomalies or poor material properties, can leave operators with long lists of defects to repair. The conservative assumptions that are commonly made for defect size, pipe material properties and defect growth rates when performing time-to-failure analysis mean that the estimated remaining life of most defects is inevitably under-predicted. Even more concerning, however, is that this approach can be non-conservative; adding the ILI tool measurement error tolerance (at 80% confidence) to the reported defect depth, as is commonly done, effectively results in the actual defect size being under-predicted 10% of the time.
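To see where the 10% figure comes from, the sketch below simulates an ILI depth error assumed to be normally distributed and unbiased, with a hypothetical tolerance of ±10% of wall thickness quoted at 80% confidence; under those assumptions, the true depth still exceeds the reported depth plus the tolerance in roughly one reading in ten.

```python
import numpy as np

# Illustrative check of the 10% under-prediction figure. Assumptions:
# unbiased, normally distributed depth error and a +/-10% wt tolerance
# quoted at 80% confidence (i.e. P(|error| <= tol) = 0.80).
rng = np.random.default_rng(seed=1)

tol = 0.10            # tolerance, fraction of wall thickness (assumed tool spec)
z_90 = 1.2816         # standard normal 90th percentile
sigma = tol / z_90    # implied standard deviation of the depth error

true_depth = 0.30                                           # true depth, fraction of wall
reported = true_depth + rng.normal(0.0, sigma, 1_000_000)   # simulated ILI readings

# "Conservative" estimate commonly used in time-to-failure analysis
under_predicted = (reported + tol) < true_depth
print(f"True depth exceeds reported depth + tolerance in "
      f"{under_predicted.mean():.1%} of readings")          # ~10%
```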

Given these issues, industry operators and regulators are increasingly accepting and utilising the probabilistic assessment methodology to assess and manage pipeline reliability. These methods comprehensively and accurately account for all sources of uncertainty and weigh the influence of each appropriately, to not only account for the worst-case scenarios, but to also assess how likely they are to occur.


Elements of a probabilistic failure model

The Pipeline and Hazardous Materials Safety Administration (PHMSA) recently made publicly available a report (project number DTPH5615T00003L) by S. Koduru, R. Adianto and J. Skow that it commissioned on quantitative risk models in the pipeline and other industries. The report outlines the key elements to consider when applying a probabilistic model:

- Types of failure frequency models.
- Consequence measures for pipeline failures.
- Data collection, quality and data management.
- Risk analysis methods.
- Model verification and benchmarking.

The following discussion describes the first key item above for the assessment of individual defects. A failure frequency model should account for the following elements, as illustrated in Figure 1.

Variability in pipe properties and dimensions

While it is common to assume the specified minimum properties, such as yield strength (SMYS), tensile strength (SMTS) or toughness, these assumptions are typically very conservative compared to the real properties of a random sample of the pipeline material and, in some cases, can even overestimate the true value of the property. For an accurate assessment of the likelihood of failure, the pipe properties and dimensions, such as true wall thickness, should be represented by probability distributions (Figure 1). Guidance and examples for how to select these distributions are provided in Annex O of ‘CSA Z662-15 Oil and Gas Pipeline Systems’, published by the Canadian Standards Association.
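As a minimal illustration of what representing properties by distributions can look like, the snippet below samples yield strength and wall thickness from assumed normal distributions for an X52 pipe; the means and coefficients of variation are placeholders chosen for illustration, not values taken from Annex O or from any mill test records.

```python
import numpy as np

rng = np.random.default_rng(seed=2)
n = 100_000

# Specified (nominal) values for an X52 line with 9.525 mm wall
smys_mpa = 359.0      # specified minimum yield strength for X52
t_nom_mm = 9.525      # nominal wall thickness

# Placeholder distribution parameters (illustrative only): actual yield
# strength tends to sit above SMYS, actual wall thickness near nominal.
yield_strength = rng.normal(1.10 * smys_mpa, 0.035 * 1.10 * smys_mpa, n)
wall_thickness = rng.normal(1.00 * t_nom_mm, 0.01 * t_nom_mm, n)

print(f"P(yield strength < SMYS)      = {(yield_strength < smys_mpa).mean():.3f}")
print(f"P(wall thickness < nominal t) = {(wall_thickness < t_nom_mm).mean():.3f}")
```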

Measurement uncertainty

The tolerance and confidence level of the measurement accuracy of ILI tools can be readily converted to appropriate distributions to accurately represent the likelihood of the range of possible true defect dimensions. The ability of the tools to detect and correctly identify defects can also be used to estimate the size and density of the undetected defects on a line, which can then be included in the assessment.
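The conversion itself is straightforward; the sketch below assumes an unbiased, normally distributed depth error and a hypothetical ±10% of wall thickness tolerance at 80% confidence, and generates the resulting distribution of plausible true depths behind a single reported value.

```python
import numpy as np

rng = np.random.default_rng(seed=3)

# Hypothetical ILI spec: depth tolerance of +/-10% wt at 80% confidence.
tol = 0.10
z = 1.2816                 # standard normal quantile at (1 + 0.80) / 2 = 0.90
sigma = tol / z            # implied standard deviation of the (assumed normal) error

# Plausible true depths behind a feature reported at 25% of wall thickness,
# assuming an unbiased tool: true depth = reported depth - error.
reported_depth = 0.25
true_depth = reported_depth - rng.normal(0.0, sigma, size=500_000)

for p in (5, 50, 95):
    print(f"{p:>2}th percentile of true depth: {np.percentile(true_depth, p):.3f}")
```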

Model uncertainty

The models used to predict defect failure can never fully match the variability that exists in the underlying test data used to develop and calibrate them. This generally leads to the selection of model parameters that conservatively estimate the capacity of the pipe to withstand load, which introduces an additional safety factor that is not usually accounted for in defect assessments. The variability between the predictions of the model and the true performance of a damaged pipe should be accounted for when predicting the expected failure rates.
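One common way to carry this into an assessment is a multiplicative model error factor applied to the predicted capacity; the sketch below assumes a lognormal factor with a mean of 1.0 and a 10% coefficient of variation, numbers chosen purely for illustration rather than calibrated to any particular burst model.

```python
import numpy as np

rng = np.random.default_rng(seed=4)

predicted_burst_kpa = 9_000.0    # capacity predicted by some burst model (placeholder)

# Illustrative model error factor: lognormal, mean 1.0, coefficient of variation 0.10
cov = 0.10
sigma_ln = np.sqrt(np.log(1.0 + cov**2))
mu_ln = -0.5 * sigma_ln**2                        # gives a mean of exactly 1.0
model_error = rng.lognormal(mu_ln, sigma_ln, 200_000)

actual_burst_kpa = predicted_burst_kpa * model_error
print(f"P(actual capacity below prediction) = "
      f"{(actual_burst_kpa < predicted_burst_kpa).mean():.2f}")
```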

Defect growth rates

A common practice for assessing corrosion defects is to match defects between subsequent ILI runs and determine a linear growth rate from the change in maximum reported depths between the two runs. This approach compounds the measurement error in both readings, which can lead to negative growth rates (over-call in the first run and under-call in the second run) and excessively large growth rates (under-call in the first run and over-call in the second run), especially if these rules are applied to defects that were not found in the first run. While statistical methods can be used to infer the underlying distribution of defect growth rates for a population of defects, the measurement error can often overwhelm the results of these assessments, making it difficult to characterise the growth of a single defect. The solution is to acknowledge this uncertainty in the analysis and allow for a range of possible growth rates.
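The compounding effect is easy to reproduce; the sketch below gives a population of defects a true growth rate of 0.1 mm/yr, 'measures' each defect twice with an assumed normal depth error, and shows how wide, and partly negative, the naive run-to-run growth rates become. The error magnitude, depths and inspection interval are all illustrative.

```python
import numpy as np

rng = np.random.default_rng(seed=5)
n = 50_000

true_growth = 0.10        # true growth rate, mm/yr (assumed, same for all defects)
interval_yr = 5.0         # time between the two ILI runs
sigma_mm = 0.8            # assumed depth error std. dev. for each run, mm

depth_run1 = rng.uniform(1.0, 4.0, n)                       # true depths at first run
depth_run2 = depth_run1 + true_growth * interval_yr         # true depths at second run

reported_run1 = depth_run1 + rng.normal(0.0, sigma_mm, n)   # independent errors per run
reported_run2 = depth_run2 + rng.normal(0.0, sigma_mm, n)

naive_growth = (reported_run2 - reported_run1) / interval_yr
print(f"True growth rate:           {true_growth:.2f} mm/yr")
print(f"Naive estimate, mean:       {naive_growth.mean():.2f} mm/yr")
print(f"Naive estimate, std. dev.:  {naive_growth.std():.2f} mm/yr")
print(f"Fraction of negative rates: {(naive_growth < 0).mean():.1%}")
```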

Figure 1. Elements of a probabilistic failure model.

Figure 2. Example of a small number of Monte Carlo simulation realisations. Left: high uncertainty in pressure loading. Right: high measurement uncertainty.




Accounting for all sources of uncertainty in assessing the potential for defect failure (in the near future) provides the most accurate way to predict the reliability of a pipeline. However, this approach requires increased computational effort, typically on the order of 1 - 10 million simulations of possible realisations of the conditions of the pipeline at each defect being assessed. An example result generated using Monte Carlo simulation with just two thousand realisations for a single defect is plotted in Figure 2, where the peak surge pressure at the location of the defect is not known with certainty due to pressure hammer effects (left-hand side), and where the ILI tool measurement accuracy is diminished by the presence of a weld near the defect (right-hand side). This level of computational effort may well be out of reach for most spreadsheet models, but with custom-built software now being deployed to the cloud, these results can be generated easily and are available in minutes, even for the full set of defects from an ILI run.
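To make the idea concrete, the sketch below runs a simplified realisation loop for a single corrosion defect on a pipeline like the one in Table 1: pipe properties, the true defect depth behind an ILI call, future growth, peak pressure and model error are all sampled, and each realisation is checked against a Modified B31G-style burst criterion and a simple leak criterion (depth reaching 80% of wall). Every distribution, the growth model and the acceptance criteria are illustrative placeholders, not the model used to produce Figure 2.

```python
import numpy as np

rng = np.random.default_rng(seed=6)
n = 1_000_000            # Monte Carlo realisations for one defect

# Nominal pipeline data from Table 1; all distributions below are illustrative.
D, t_nom, smys = 762.0, 9.525, 359.0     # diameter (mm), wall (mm), X52 SMYS (MPa)
maop_mpa = 6.55                          # 6550 kPa
years = 5.0                              # prediction horizon after inspection

# Sampled inputs (assumed, uncalibrated distributions)
t = rng.normal(t_nom, 0.01 * t_nom, n)                    # wall thickness, mm
ys = rng.normal(1.10 * smys, 0.035 * 1.10 * smys, n)      # yield strength, MPa
err = rng.normal(0.0, (0.10 / 1.2816) * t_nom, n)         # depth error: +/-10% wt at 80% conf.
depth = 0.45 * t_nom - err                                # true depth behind a 45% wt call, mm
depth += rng.gamma(2.0, 0.05, n) * years                  # uncertain corrosion growth, mm
depth = np.clip(depth, 0.0, t)
length = 150.0                                            # defect length, mm (assumed exact)
p_op = rng.normal(maop_mpa, 0.05 * maop_mpa, n)           # peak pressure, MPa
xm = rng.lognormal(-0.005, 0.10, n)                       # model error factor, mean ~1.0

# Modified B31G-style burst capacity (flow stress = yield strength + 68.95 MPa)
flow = ys + 68.95
z = length**2 / (D * t)
M = np.where(z <= 50.0,
             np.sqrt(1.0 + 0.6275 * z - 0.003375 * z**2),
             0.032 * z + 3.3)
dt = depth / t
p_burst = xm * (2.0 * t * flow / D) * (1.0 - 0.85 * dt) / (1.0 - 0.85 * dt / M)

leak = dt >= 0.80        # leak criterion: depth reaches 80% of wall thickness
burst = p_op >= p_burst  # burst criterion: load exceeds capacity
print(f"P(leak)  within {years:.0f} years: {leak.mean():.2e}")
print(f"P(burst) within {years:.0f} years: {burst.mean():.2e}")
print(f"P(failure):              {(leak | burst).mean():.2e}")
```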

Reliability-based defect management case study

To properly manage the integrity of a pipeline, all of the threats should be modelled and combined before determining whether the expected reliability is acceptable. Figure 3 shows the breakdown of reported incidents for liquid lines in the PHMSA database over the period 2010 - 2016, indicating that external corrosion, external interference from digging equipment and equipment failure are the major threats to the US network.

For simplicity, this case study shows how probabilistic defect assessments can be applied to defects caused by corrosion and equipment impact only. The parameters for the example pipeline are summarised in Table 1.

The ROW conditions that affect the likelihood of equipment impact-related failures are listed in Table 2.

The pipeline ROW transitions from agricultural land through an industrial area on the outskirts of a city and finally into an urban zone; along the route, the pipeline crosses two sensitive waterbodies (rivers in this case). These conditions were used to determine the frequency and depth of digging that is expected to occur on the pipeline route, as well as the environmental sensitivity to a spill.
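A common way to structure this type of threat, in the spirit of fault-tree models from the literature, is to express the equipment impact failure rate as a product of a digging activity rate and a chain of conditional probabilities reflecting ROW conditions like those in Table 2; the numbers below are invented for illustration and are not the values used in this case study.

```python
# Minimal sketch of an equipment impact failure frequency decomposition
# (illustrative numbers only; not the case study inputs).
excavations_per_km_yr = 0.05   # digging activity on the ROW, driven by land use
p_not_prevented = 0.3          # one-call, signage and surveillance all fail to stop the dig
p_hit_given_dig = 0.4          # the excavation reaches and strikes the buried pipe
p_fail_given_hit = 0.1         # the strike is severe enough to cause a leak or rupture

failure_rate = (excavations_per_km_yr * p_not_prevented
                * p_hit_given_dig * p_fail_given_hit)
print(f"Equipment impact failure rate: {failure_rate:.1e} per km-yr")
```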

Figure 3. Reportable incidents for hazardous liquids pipelines, PHMSA (2010 - 2016).

Table 1. Pipeline parameters
Pipe grade: X52
Diameter: 762 mm (30 in.)
Wall thickness: 9.525 mm (3/8 in.)
Length: 71.3 km (44.3 miles)
Product: Medium crude
Maximum allowable operating pressure: 6550 kPa (950 psi)
Depth of cover: 0.90 m (2.95 ft)
Installation date: 26 November 1972

Table 2. ROW conditions
Alignment markers – above-ground: No
Alignment markers – buried: No
Alignment markers – explicit signage: At selected strategic locations
Dig notification requirement: Required but not enforced
Dig notification response: Locate and mark with no site supervision
Mechanical protection: None
One-call system type: Multiple systems
Public awareness level: Average
ROW indication: Continuous but limited indication
Surveillance interval: Bi-weekly
Surveillance method: Aerial

Figure 4. Corrosion defect classification.



The combination of environmental sensitivity and type of liquid transported was used to develop thresholds for the expected failure rate that would ensure acceptable reliability, using a procedure based on the methodology described in an IPC paper by M. Nessim, R. Adianto and M. Stephens titled ‘Limit States Design for Onshore Pipelines – Methodology and Implications’ (IPC2014-33434).

Figure 4 shows the size and shape categories of the corrosion defects measured on the line and the ILI tool depth measurement tolerance, by category, at 80% confidence. The categories used here are those defined by the Pipeline Operators Forum.

The calculated rates of failure due to equipment impact and growth of corrosion defects are shown for year 5 after the date of inspection in Figure 5. These rates can be maintained below the reliability threshold by removing defects from the line through targeted mitigation digs, as shown in Figure 6. Note that the equivalent rupture rate shown in these figures is a combination of the calculated rupture rate and the large leak rate, pro-rated by the relative consequence, as described in IPC2014-33434.
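The combination rule can be written compactly as shown below; the consequence ratio used to pro-rate the large leak rate is a placeholder, as the actual weighting comes from the consequence assessment described in IPC2014-33434.

```python
def equivalent_rupture_rate(rupture_rate, large_leak_rate, consequence_ratio):
    """Combine the rupture and large leak rates, weighting the large leak rate
    by its consequence relative to a rupture (consequence_ratio in [0, 1])."""
    return rupture_rate + consequence_ratio * large_leak_rate

# Illustrative numbers only (per km-yr)
print(equivalent_rupture_rate(rupture_rate=2.0e-5,
                              large_leak_rate=8.0e-5,
                              consequence_ratio=0.25))
```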

Table 3 lists the defects to be repaired or removed through targeted digs before the next planned ILI run, in the order in which the digs should be executed to maintain acceptable reliability. The plan also considers the proximity of defects to each other and the interactions between hazard zones, making it possible to optimise repairs by targeting multiple defects that share the same dig site and providing a cost-effective means of maintaining the integrity of the line. This is illustrated by the last two defects in Table 3, which are in close proximity to each other.
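A simple way to find defects that can share an excavation is to cluster their chainages within a single dig window; the 50 m window below is an assumed value for illustration, and a real plan would also respect the repair order and year shown in Table 3.

```python
def group_by_dig_site(locations_km, window_m=50.0):
    """Group sorted defect chainages (km) so that defects within one
    excavation window share a dig site (illustrative clustering only)."""
    groups, current = [], []
    for loc in sorted(locations_km):
        if current and (loc - current[0]) * 1000.0 > window_m:
            groups.append(current)
            current = []
        current.append(loc)
    if current:
        groups.append(current)
    return groups

# The last two defects in Table 3 (1.191 km and 1.192 km) fall into one dig.
print(group_by_dig_site([37.611, 0.577, 1.191, 1.192]))
```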

Conclusion

The proliferation of comprehensive pipeline information databases and the improving ability of ILI tools to identify and size cracks, corrosion, dents and complex features mean that operators have never been in a better position to ensure the safe operation of their assets. Given the limitations of traditional methods for assessing the time-to-failure of measured defects, industry operators and regulators are increasingly accepting and utilising the probabilistic assessment methodology to assess and manage pipeline reliability. These methods comprehensively and accurately account for all sources of uncertainty and weigh the influence of each appropriately, to not only account for the worst-case scenarios, but to also assess how likely they are to occur.

These methods can be used to determine the combined influence of all relevant threats to the reliability of a pipeline system over time. The approach requires increased computational effort, typically on the order of 1 - 10 million simulations for each defect being assessed, but with custom-built software now being deployed to the cloud, these results can be generated easily and are available in minutes. This is possible even for the full set of defects from an ILI run, allowing operators to select a cost-optimal mitigation strategy and demonstrate to regulators that all sources of uncertainty have been properly accounted for.

Figure 5. Calculated pipeline reliability: equivalent rupture rate in year five after mitigation digs.

Figure 6. Calculated pipeline reliability: effect of targeted digs on reliability.

Table 3. Reliability-based dig plan
Repair before year | Repair order | Defect location (km) | Threshold exceeded
4 | 1 | 37.611 | Small leak
4 | 2 | 0.577 | Burst
5 | 1 | 37.563 | Small leak
5 | 2 | 40.983 | Small leak
5 | 3 | 4.634 | Small leak
5 | 4 | 0.492 | Burst, small leak
5 | 5 | 59.106 | Small leak
5 | 6 | 27.202 | Burst
5 | 7 | 1.191 | Burst
5 | 7 | 1.192 | Burst

World Pipelines / REPRINTED FROM JANUARY 2018