
Energy Theft Identification Using Adaboost Ensembler in the Smart Grids

Computers, Materials & Continua, 2022, Issue 7

Muhammad Irfan, Nasir Ayub, Faisal Althobiani, Zain Ali, Muhammad Idrees, Saeed Ullah,Saifur Rahman, Abdullah Saeed Alwadie, Saleh Mohammed Ghonaim, Hesham Abdushkour,Fahad Salem Alkahtani, Samar Alqhtaniand Piotr Gas

1Electrical Engineering Department, College of Engineering, Najran University Saudi Arabia, Najran, 61441, Saudi Arabia

2Department of Computer Science, Federal Urdu University of Science and Technology, Islamabad, 44000, Pakistan

3Faculty of Maritime Studies, King Abdulaziz University, Jeddah, 21589, Saudi Arabia

4Department of Electrical Engineering, HITEC University, Taxila, 47080, Pakistan

5Department of Computer Science and Engineering, University of Engineering and Technology, Narowal Campus, Lahore,54000, Pakistan

6College of Computer Science and Information Systems, Najran University, Najran, 61441, Saudi Arabia

7Department of Electrical and Power Engineering, AGH University of Science and Technology Mickiewicza 30 Avenue,Krakow, 30-059, Poland

Abstract: One of the major concerns for utilities in the Smart Grid (SG) is electricity theft. With the implementation of smart meters, the frequency of energy usage measurement and data collection from smart homes has increased, which makes advanced data analysis possible that was not feasible before. For this purpose, we have taken historical data of energy thieves and normal users. To avoid imbalanced observations and biased estimates, we applied the interpolation method. Furthermore, the data imbalance issue is resolved in this paper by the NearMiss undersampling technique, which makes the data suitable for further processing. By proposing an improved version of Zeiler and Fergus Net (ZFNet) as a feature extraction approach, we were able to reduce the model's time complexity. To minimize overfitting, increase the training accuracy and reduce the training loss, we have proposed an enhanced method that merges the Adaptive Boosting (AdaBoost) classifier with the Coronavirus Herd Immunity Optimizer (CHIO) and the Forensic Based Investigation Optimizer (FBIO). In terms of low computational complexity, minimized over-fitting on a large quantity of data, reduced training time and training loss, and increased training accuracy, our model outperforms the benchmark schemes. Our proposed algorithms, Ada-CHIO and Ada-FBIO, have low Mean Average Percentage Error (MAPE) values, i.e., 6.8% and 9.5%, respectively. Furthermore, due to the stability of our model, the proposed Ada-CHIO and Ada-FBIO achieve accuracies of 93% and 90%. Statistical analysis shows that the hypothesis we proved using statistics holds for the proposed technique against benchmark algorithms, which also depicts the superiority of our proposed techniques.

Keywords: Smart grids and meters; electricity theft detection; machine learning; AdaBoost; optimization techniques

1 Introduction

By adding new transmission technology, i.e., smart meters, a traditional power network becomes an SG infrastructure. Current findings in [1] demonstrate that the SG can help to control electrical power efficiently. To make the ultimate use of deployed resources [2], the SG framework has created the platform [3] for transactive energy and short-term load balancing. The work in [4] proposes a hierarchical energy delivery system that avoids peak hours and exchanges more power for less money. To reduce the unpredictable nature of green energy, a strategy based on information-gap decision theory [5] is applied. In an SG, the meter reading shares data among energy users and the infrastructure. It stores an immense amount of data, including consumers' electrical energy usage. Artificial intelligence techniques can exploit these data to map customer energy usage trends and reliably detect power thieves.

Power grids all around the world are concerned with energy losses in electricity generation and transmission. Energy losses are generally categorized as Non-Technical Losses (NTL) and Technical Losses (TL) [6]. TLs are caused by the internal functioning of power grid components, such as transformers and transmission lines, during the transmission of electricity; NTLs are defined as the difference between total losses and TLs and are caused mostly by energy theft. Physical attacks such as line tapping, meter smashing and interrupting meter readings are the most common ways to steal power [7]. As a result, revenue losses of power utilities arise from these electricity fraud activities. Herein, the cost of power theft in the United States (US) is estimated to be about $4.5 billion a year [8]. Nonetheless, it is believed that electricity theft costs the world's power systems more than $20 billion per year [9]. As a result of the advent of digital metering infrastructure in SGs, utility companies have collected massive volumes of actual electricity usage data from smart meters, allowing them to track power loss [10]. The Advanced Meter Infrastructure (AMI) network, on the other hand, makes new energy theft attacks possible. AMI attacks can take a range of forms, including cyber-attacks and tampering with digital devices. Checks for unauthorized line diversions, meter data comparisons and testing of problematic equipment or hardware are also critical strategies for identifying electricity theft. However, these solutions are highly costly and time-consuming when inspecting all of the meters in a system [11].

Special devices, such as transmission transformers and wireless sensors, use the state-based recognition concept [12]. These techniques can detect energy theft, but they necessitate the procurement of real-time system topology and additional physical measurements, which can be challenging to obtain. Game-based control systems create a game involving the power utility and the thief, then use the game equilibrium to generate various normal and abnormal behavior distributions. They achieve a low cost and a fair outcome in minimizing energy theft, as detailed in [13], although evaluating the utility function of each player (e.g., regulators, marketers and fraudsters) is still a challenge. Deep learning and machine learning approaches are examples of artificial-intelligence-focused methods. There are two types of machine learning systems: clustering models and classification models, as described in [14]. While the machine learning detection methods described above are revolutionary and exceptional, their efficiency is still not adequate for practice. The majority of these approaches, for example, rely on manual feature extraction due to their limited capacity to manage data with several dimensions. The standard deviation, mean, minimum and maximum of costs, among others, are all hand-designed features. Manually extracting features from smart meter data is time-consuming and tedious, and it misses capturing 2D features.

From the aforementioned literature, the results of current Electricity Theft Detection (ETD) methods are relevant. These approaches, however, have certain limitations, which are outlined as follows. 1) Traditional ETD employs manual processes, such as human checking of meter readings and manual inspection of electricity transmission lines. These tactics require an additional cost for the inspection teams that must be hired. 2) The False Positive Rate (FPR) of game-theory-based approaches is high, while the recognition rate is low. 3) The state-based approach is costly, since the installation of hardware needs an extra cost [15]. 4) The handling of unbalanced data is a big concern in ETD using machine learning techniques. This issue is left unresolved in conventional models. Some authors employ the Synthetic Minority Oversampling Technique (SMOTE) and RUSBoost approaches, both of which result in information loss and overfitting. 5) In some instances, the available data includes inaccurate records that reduce the precision of the classification [16]. 6) For big data, traditional machine learning strategies like Logistic Regression (LR) and the Support Vector Machine (SVM) have poor classification efficiency [17]. 7) Machine learning techniques have an overfitting problem on large amounts of ETD data [18].

To address the aforementioned issues, namely missing values and outliers in the data, we employed an interpolation approach to impute missing values, together with normalization methods and the three-sigma rule, to pre-process the electrical data. For managing the imbalanced dataset, the NearMiss algorithm is applied. Afterward, the balanced data is fed into the ZFNet module for feature extraction, and ZFNet is chosen to detect irregular patterns. Finally, the obtained features are forwarded to the AdaBoost-based FBIO and AdaBoost-based CHIO modules for classification. To this end, the following points discuss the paper's main applications. 1) The proposed strategy offers a solution to an issue in the power grid, namely energy waste due to electricity theft. 2) Utility companies can effectively deploy this model by classifying electricity criminals and reducing energy waste using current power consumption data. 3) It is possible to use the suggested solution against all types of customers that steal energy.

Herein, the following are the key contributions:

1. We have stabilized and balanced the data, removed biased estimates and ensured valid conclusions with the interpolation method and the NearMiss algorithm, along with the proposed enhanced version of the ZFNet technique.

2. We also minimized the overfitting issues, reduced computational complexity by 9%, and decreased model training time and model training loss with the proposed enhanced version of the AdaBoost classifier.

3. Less computation time is achieved while utilizing as few resources as feasible. Accuracy and stability in classifying anomalous and normal users are achieved using the optimization methods CHIO and FBIO. The optimization techniques are merged with AdaBoost to fine-tune the classifier by tuning a subset of its parameters.

4. Extensive simulations are carried out on actual electricity consumption data. The following measures are used as output evaluators for comparative analysis: accuracy, recall, F1-score, the Kruskal Test, Mann-Whitney Test, Paired Student's Test, ANOVA Test, Student's Test, Pearson's Test, Wilcoxon Test, Spearman's Test, Chi-Squared Test, Kendall's Test, the Receiver Operating Characteristic Area Under Curve (ROC-AUC), MAPE, Mean Average Error (MAE) and Root Mean Square Error (RMSE).

2 Related Work

Recently, researchers have applied different techniques to track energy theft. These methods can be classified into three categories: game theory, state-based strategies and machine learning. To detect power theft, state-based solutions use external hardware devices such as distribution transformers, wireless sensors and smart meters [19]. Owing to the need for extra hardware resources, this approach has a high deployment cost. In a theory-based game scheme, the power suppliers and energy criminals are assumed to be playing a game. The difference between honest consumer behavior and that of electricity thieves may be utilized to assess the game's outcome [20]. However, describing the utility function of all players in the game is extremely difficult. For ETD, machine learning approaches are commonly used. They are further categorized into supervised (classification) and unsupervised (clustering) approaches, which are applied to labeled and unlabeled datasets, respectively, to distinguish between illegitimate and legitimate consumers. Tab. 1 presents the existing approaches used for ETD, together with their inputs and their shortcomings.

Our solution is based on supervised learning. Therefore, the details of recent advances in supervised learning methods are reviewed. LR and SVM are often used for ETD [21]. When the dataset is small, these strategies work well. However, when the dataset is large and highly imbalanced, these strategies are not successful. A hybrid model combining Long Short-Term Memory (LSTM) and a Convolutional Neural Network (CNN) was proposed in [22]. The CNN gathers features, while the LSTM refines them to distinguish between normal consumers and energy thieves. For an unbalanced dataset, SMOTE is applied to balance it. Strong results have been obtained, i.e., 90% accuracy and 87% recall. However, the over-fitting problem caused by the inclusion of duplicate data through SMOTE is not taken into account. The authors proposed a hybrid ETD model based on LSTM and a Multi-Layer Perceptron (MLP) in [23]. LSTM and MLP are used to combine auxiliary data and energy usage data; this model detects the NTL. The problem of unbalanced data, on the other hand, is not resolved before classification. Besides, because of training on fewer data, the FPR of this model is high. When 80% of the data was used in training, the Precision-Recall Area Under Curve (PR-AUC) reached 54.5%.

Table 1: Summary of the related works


To detect electricity theft, the authors of [32] address gradient loss by improving the internal structure of the LSTM. A combined GMM and LSTM model is used in this methodology. The results from this model were excellent, i.e., 90.1% accuracy and 91.9% recall. However, the execution time of this model is long. For energy theft detection, the authors of [33] use a CNN model. According to the classification by fully connected layers [34], the CNN suffers from a degradation of generalization. For the final classification, the authors used a Random Forest (RF). Besides, the imbalanced data is handled using SMOTE. Using decision trees with the CNN, generalized performance is achieved. SMOTE, on the other hand, creates synthetic data, which leads to the issue of overfitting. For NTL detection, the authors in [35] employed a gradient boosting theft detector. This approach refines precision by learning from a decision tree ensemble, demonstrating the model's usefulness. The simulations indicate that the gradient boosting theft detector outperforms most machine learning methods.

3 Proposed Methodology

Fig. 1 depicts the proposed model for our ETD. There are five phases in our model. Firstly, the data is loaded and preprocessed using methods such as missing-value interpolation and normalization. Secondly, the imbalanced data is forwarded as input to the NearMiss technique for undersampling. Thirdly, the balanced data is forwarded to ZFNet for feature extraction. Fourthly, the data is passed to the proposed AdaBoost classifier, which is optimized by CHIO and FBIO. After classification, we perform a performance evaluation using performance metrics and performance error metrics, i.e., MAPE, Mean Square Error (MSE), RMSE, F1-score, precision, accuracy and recall. Furthermore, statistical analysis is also performed on the proposed method. Our proposed algorithm is compared with the benchmark techniques.

Figure 1: Proposed electricity theft detection model

3.1 Dataset Description

The proposed system is evaluated using State Grid Corporation of China (SGCC) [36] smart meter data. The data used in this paper is time-series data, meaning that readings are collected at regular time intervals. The input has 1032 dimensions (attributes). The data covers a duration of three years and consists of electricity consumption records from 42,372 customers. The released data also provides the ground truth that 9% of the overall customers are energy thieves, as shown in Tab. 2.

Table 2: Data description and details

In the energy usage data, trustworthy users have different levels of consumption from electricity thieves. Electricity thieves have erratic energy usage patterns and, because of meter tampering, their energy consumption is often low. Besides, honest customers show a daily periodicity in their consumption pattern. Machine learning algorithms use data from smart meters to detect consumers' unusual consumption patterns and identify them as energy thieves.

3.2 Preprocessing of Data

The data is preprocessed using the interpolation approach, which improves the accuracy of the results. The interpolation technique is given by Eq. (1) [19]:

where nl indicates the input value/data.

To remove outliers, the three-sigma technique is applied to the input data. These outliers arise because energy use spikes on non-working days. Using Eq. (2) [23], we recreate these values using the three-sigma rule of thumb:

The average value of n is avg(n), while the standard deviation is std(n). This method works well for dealing with outliers. To scale the data between 0 and 1, we employed the Min-Max scaling technique after applying interpolation and the three-sigma rule. This is required because neural networks perform poorly when the inputs are on inconsistent scales [24]. Data normalization improves the training phase of deep learning models by providing the data on a standard scale. Eq. (3) is used to normalize the data as follows:

where the normalized value is represented by M'. The consistency of the input data determines the efficiency of machine learning algorithms. The quality and dependability of the data utilized in these models are improved by pre-processing them.
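Since the bodies of Eqs. (1)-(3) are not reproduced in this copy of the text, the following Python sketch illustrates one common reading of this preprocessing pipeline: linear interpolation of missing values, capping of values beyond a three-sigma-style threshold, and Min-Max scaling. The threshold multiplier (avg + 2*std) and the per-customer treatment are assumptions for illustration, not taken verbatim from the paper.

```python
# Illustrative preprocessing sketch (NumPy/pandas). The exact equation bodies are
# not shown in this text, so the interpolation rule, the outlier threshold
# (avg + 2*std) and the per-customer treatment below are assumptions.
import numpy as np
import pandas as pd

def preprocess(df: pd.DataFrame) -> pd.DataFrame:
    """df: rows = customers, columns = daily consumption readings."""
    # 1) Missing-value interpolation along the time axis (Eq. (1) analogue)
    df = df.interpolate(axis=1, limit_direction="both")

    # 2) Three-sigma outlier capping (Eq. (2) analogue), per customer
    avg = df.mean(axis=1)
    std = df.std(axis=1)
    upper = avg + 2 * std                      # assumed threshold
    df = df.clip(upper=upper, axis=0)

    # 3) Min-Max normalization to [0, 1] (Eq. (3) analogue), per customer
    dmin = df.min(axis=1)
    drange = (df.max(axis=1) - dmin).replace(0, 1)  # guard against constant rows
    df = df.sub(dmin, axis=0).div(drange, axis=0)
    return df
```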

3.3 Balancing of Input Dataset

In the SGCC dataset, the number of typical energy consumers far outnumbers the number of thieves. This data imbalance is a serious problem in ETD that must be addressed; otherwise, the classifier would be biased towards the majority class, resulting in poor performance [25].

Motivated by SMOTEBoost [26] and SMOTE [27], we handle the imbalanced data with sampling. To minimize the difference in quantity between the two classes, sampling-based methods under-sample or over-sample the imbalanced dataset. To reduce the majority class occurrences, under-sampling randomly dismisses entries of the majority class. This strategy reduces the size of the dataset, which is beneficial from a statistical view; however, because the elimination is random, the remaining data may or may not be a good representative sample, and a model built on it may produce less accurate results on the test data. Under-sampling seeks to balance class representation by removing instances of the majority class at random. When the two classes have examples that are substantially similar to one another, those instances of the majority class are deleted to enlarge the space available for separating the two classes. This contributes to the classifying procedure.

Near-neighbor-based methods are commonly used in most under-sampling techniques. They are used to mitigate the issue of information loss. A brief explanation of how some of the near-neighbor approaches work follows:

• Stage 1: The method begins by identifying the distances between majority and minority class instances. In this circumstance, it is the majority class that must be under-sampled.

• Stage 2: The N majority class instances with the shortest distances to the minority class are then selected.

• Stage 3: If the minority class has k instances, the majority class will retain k*n instances at the end of this nearest-neighbor process.

The NearMiss technique for selecting the n closest examples in the majority class can be implemented in a variety of ways:

• NearMiss Variant 1: Selects majority class samples with the shortest average distances to the nearest k occurrences of the minority class.

• NearMiss Variant 2: Selects samples from the majority class that have the shortest average distances to the minority class's farthest k occurrences.

• NearMiss Variant 3 is a two-step process. First, the nearest M neighbors of each instance of the minority class are saved. Then, the majority class instances with the largest average distance to their N nearest neighbors are chosen.
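A minimal sketch of this undersampling step with the imbalanced-learn library is given below; the chosen variant (version=1) and neighbor count are illustrative assumptions, since the text does not state which configuration is used.

```python
# NearMiss undersampling sketch with imbalanced-learn. The variant and the
# neighbor count are assumptions; the text does not specify them.
from collections import Counter
from imblearn.under_sampling import NearMiss

def balance(X, y, version=1, n_neighbors=3):
    """X: preprocessed consumption features, y: theft/normal labels."""
    nm = NearMiss(version=version, n_neighbors=n_neighbors)
    X_bal, y_bal = nm.fit_resample(X, y)
    print("before:", Counter(y), "after:", Counter(y_bal))
    return X_bal, y_bal
```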

3.4 Feature Extraction Using ZFNet

The Graphic Geometry Group (GGP) [28] launched ZFNet, an updated five-layer version of the CNN. A 7x7 filter and a decreased stride value are utilized in the first layer. The softmax layer of ZFNet is the final one. It is used for feature isolation and propagation learning [29]. This work uses ZFNet for feature extraction to display the representation spaces formed by the filters of all layers in greater detail. All of a layer's activations are utilized to extract the related features using a deconvolution network. Convolutional and pooling layers are utilized, and in the last dense layer softmax is used as the activation function. The multi-pooling layers of the ZFNet modules are superior at capturing significant high-level data characteristics. We examine the input image that maximizes a filter's activation to discover which features each filter captures. Sliding the kernel over the full input yields a feature map in the convolutional operation. The kernel function merges the final output from the convolution layer after numerous feature mapping procedures, namely:

The input in Eq. (4) is m and T is the filter, also known as the kernel; the output is calculated from the number of times a certain filter is activated [30], while the input image is random at first. k is the convolutional layer, s is the input data size and d is the convolution result size. The Rectified Linear Unit (ReLU) [21] is used as an activation function to introduce non-linearity to the model, as demonstrated in Eq. (5):

A dense layer is used to expose the essential features after the dropout layer. To avoid over-fitting, the dropout rate is set to 0.01 and the learning rate is set to 0.001. The final dense layer is activated with softmax, which is specified in Eq. (6) [8] as follows:

If K and S are the function and weight matrices, respectively, then the output is determined in Eq. (7) as:

The ZFNet hyper-parameters include the learning rate, batch size, number of epochs, optimizer and drop-out rate. These parameters are fundamental for finding the ZFNet module's optimum results.
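As an illustration of how such a feature extractor might look in code, the following Keras sketch builds a small ZFNet-style CNN with a 7x7 first-layer kernel, reduced stride, a dropout rate of 0.01 and a learning rate of 0.001 as stated above. The 24x43 reshaping of the 1032 input attributes, the number of filters and the intermediate layer sizes are illustrative assumptions, not the authors' exact configuration; features would be taken from the penultimate dense layer.

```python
# Hedged sketch of a ZFNet-style feature extractor (TensorFlow/Keras).
# Layer widths and the 24x43 reshape of the 1032 attributes are assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_zfnet_like(input_shape=(24, 43, 1), num_classes=2):
    inputs = layers.Input(shape=input_shape)
    # First layer mirrors ZFNet's 7x7 kernel with a reduced stride
    x = layers.Conv2D(32, (7, 7), strides=(2, 2), padding="same", activation="relu")(inputs)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Conv2D(64, (3, 3), padding="same", activation="relu")(x)
    x = layers.MaxPooling2D((2, 2))(x)
    x = layers.Flatten()(x)
    features = layers.Dense(128, activation="relu", name="features")(x)
    x = layers.Dropout(0.01)(features)                 # dropout rate from the text
    outputs = layers.Dense(num_classes, activation="softmax")(x)

    model = models.Model(inputs, outputs)
    model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    # Second model exposing the penultimate dense layer as the feature extractor
    extractor = models.Model(inputs, features)
    return model, extractor
```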

3.5 Classification Using AdaBoost Optimized by CHIO and FBIO

For classification, we have used the AdaBoost algorithm, which is optimized by CHIO and FBIO.The details of the algorithm are further explained in subsections.

3.5.1 AdaBoost Algorithm

The AdaBoost algorithm, or Adaptive Boosting, is a boosting approach used as an ensemble method in machine learning [13]. Weights are reassigned to each instance at every round, with improperly classified instances receiving larger weights; this adaptive re-weighting is what gives AdaBoost its name. Boosting is used to minimize bias and variance in supervised learning. It is based on the sequential training of learners.

3.5.2 Working of the AdaBoost Algorithm

AdaBoost creates n decision trees during the data training cycle. When the first decision tree/model is built, the records that were incorrectly labeled by the previous model take precedence. These records are emphasized when training the second model. The procedure is repeated until the specified number of base learners has been reached. Keep in mind that all boosting methods may replicate records [15]. AdaBoost is a specific training approach for boosted classifiers. A boosted classifier is a classifier of the form given in Eq. (8) [16]:
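The body of Eq. (8) is not reproduced in this copy of the text; assuming the standard form of a boosted classifier built from R weak learners f_r, it would read:

F_R(c) = \sum_{r=1}^{R} f_r(c),

where each f_r is a weak learner weighted during training.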

In Eq. (8), each fr function takes an object c as input and returns a value indicating the object's class. In a two-class problem, for example, the sign of the weak learner's output defines the predicted object class, while the absolute value represents the confidence in that classification. Similarly, the output of the r-th classifier is positive if the sample belongs to the positive class; otherwise, it is negative.

Each weak learner provides an output hypothesis, h(ci), for each sample in the training set. At each iteration r, a weak learner is selected and assigned a coefficient alpha such that the cumulative training error Wr of the resulting r-stage boosted classifier is minimized. At each iteration of the training algorithm, every sample in the training set is given a weight wi,r equal to the current error W(Fr-1(xi)) on that sample. These weights can guide the training of the weak learner; for example, decision trees can be grown that favor splitting sets of samples with high weights. The output of the decision trees is a class probability estimate s(c) = S(h = 1|c), the certainty that c belongs to the positive class. Hastie derived an empirical minimizer of e^(-h(Fr-1(c) + fr(p(c)))) for a fixed f(c), where:

where c describes the weighted error rate. Rather than increasing the output of the entire tree by a fixed value, each leaf node now outputs half the logit transform of its previous value, as shown in Eq. (9) [17].
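For concreteness, a minimal scikit-learn sketch of this boosting stage is given below. The hyper-parameter values shown are placeholders; in the proposed method they are selected by CHIO and FBIO rather than fixed by hand, and X_train/y_train stand for the ZFNet features and the theft/normal labels.

```python
# Minimal AdaBoost sketch (scikit-learn). n_estimators and learning_rate are
# placeholders; in the proposed method these are tuned by the CHIO/FBIO optimizers.
from sklearn.ensemble import AdaBoostClassifier

def train_adaboost(X_train, y_train, n_estimators=100, learning_rate=0.5):
    """X_train: ZFNet features; y_train: 0 = normal user, 1 = energy thief."""
    ada = AdaBoostClassifier(n_estimators=n_estimators,    # sequential weak learners (stumps)
                             learning_rate=learning_rate,  # shrinks each learner's contribution
                             random_state=0)
    ada.fit(X_train, y_train)
    return ada   # ada.predict(X_test) gives labels; ada.predict_proba(X_test)[:, 1] gives scores
```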

3.5.3 Forensic Based Optimization

Nguyen and Chou introduced the FBIO approach, which is inspired by police officers' forensic analysis methods. The FBIO mimics police officers, who use criminal investigation, arrest and conviction as their tools. The investigation phase and the pursuit phase are the two primary stages of the FBIO. The investigators' unit is in charge of the investigation phase, while the police officers' team controls the pursuit phase. During the investigation process, the parameter XAi represents the i-th suspected location to be investigated (i = 1, 2, ..., NPA), whereas XBi denotes the i-th direction of the police officer, in which the officer continues to pursue the attacker (i = 1, 2, ..., NPB). The terms NPA and NPB refer to the number of inspected locations and the number of police personnel in the pursuit squad, respectively. In this algorithm, the population size (NP) is treated the same as NPA and NPB. The forensic procedure is completed when the total number of iterations (gmax) is reached. The FBIO algorithm consists of four steps: analysis of results (A1), course of the investigation (A2), action (B1) and extension of the action phase (B2). In (A1), a new suspected location (Xskl) is deduced; the parameter XAi and knowledge about other possible locations are used to make this decision. It is presumed that each individual moves as a result of the actions of others. The flowchart of the FBIO method is shown in Fig. 2a.


Figure 2: Flowchart of the optimization algorithm

3.5.4 Coronavirus Herd Immunity Optimization

In this article, we use the CHIO algorithm [18] for the parameter tuning of AdaBoost. CHIO is used to reduce the time complexity and improve the precision of the AdaBoost performance. CHIO was inspired by the idea of herd immunity as a means to tackle the coronavirus disease (COVID-19) outbreak. The pace at which a coronavirus infection spreads is determined by how infected individuals interact with other members of society. Health authorities recommend social distancing to shield members of the community from the disease. Herd immunity is a condition reached by a population when most of its members are immune, preventing disease spread. These ideas are modeled using optimization principles. CHIO combines herd immunity and social distancing strategies. For herd immunity, three types of individual cases are used: susceptible, infected and immune. Social-distancing-inspired rules are then used to decide how the genes of newly generated solutions are updated. The flow of the CHIO algorithm is shown in Fig. 2a.

3.6 Classification with Ensembler

The ETD classification is carried out using AdaBoost tuned by CHIO and FBIO. FBIO and CHIO compute the optimal values of the AdaBoost parameters, as illustrated in Fig. 3. The optimization algorithms determine the most suitable values of the classifier's parameters, allowing the classifier to perform better.
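Since CHIO and FBIO are not available as off-the-shelf library routines, the sketch below shows the general shape of this tuning loop with a generic population-based search standing in for either optimizer: candidate (n_estimators, learning_rate) pairs are scored by validation accuracy and the best candidate parameterizes the final AdaBoost model. The search ranges, population size and perturbation rule are assumptions for illustration only, not the CHIO or FBIO update equations.

```python
# Generic population-based hyper-parameter search standing in for CHIO/FBIO.
# Search ranges, population size and the perturbation rule are assumptions.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def fitness(params, X_tr, y_tr, X_val, y_val):
    n_est, lr = int(params[0]), float(params[1])
    clf = AdaBoostClassifier(n_estimators=n_est, learning_rate=lr, random_state=0)
    clf.fit(X_tr, y_tr)
    return accuracy_score(y_val, clf.predict(X_val))

def tune_adaboost(X, y, pop_size=10, iterations=20, seed=0):
    rng = np.random.default_rng(seed)
    X_tr, X_val, y_tr, y_val = train_test_split(X, y, test_size=0.2, random_state=seed)
    # population of candidates: n_estimators in [50, 300], learning_rate in [0.01, 1.0]
    pop = np.column_stack([rng.integers(50, 300, pop_size),
                           rng.uniform(0.01, 1.0, pop_size)])
    best, best_fit = None, -np.inf
    for _ in range(iterations):
        for i in range(pop_size):
            fit = fitness(pop[i], X_tr, y_tr, X_val, y_val)
            if fit > best_fit:
                best, best_fit = pop[i].copy(), fit
            # move each candidate toward the current best with a small random step
            pop[i] = best + rng.normal(0, [20, 0.05])
            pop[i, 0] = np.clip(pop[i, 0], 50, 300)
            pop[i, 1] = np.clip(pop[i, 1], 0.01, 1.0)
    return {"n_estimators": int(best[0]), "learning_rate": float(best[1]), "val_acc": best_fit}
```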

Figure 3: A visual view of the optimized AdaBoost model

4 Simulation Results and Discussions

The findings of our proposed model's implementation are described in this section and explained in terms of performance metrics. We simulated our model on a system with a Core i7 processor running at 4.8 GHz and 16 GB of RAM. The Anaconda (Spyder) IDE and the Python language are used. Extensive simulations are carried out, and the results are explained below in Figs. 4-6.

In Figs. 4a and 4b, the curve of our proposed model gradually increases and attains high accuracy. The reason is that the optimizers supply optimized parameter values to the proposed methods. The accuracy of our proposed model increases with the number of iterations. As the accuracy increases, the loss of our model decreases with the iterations, as shown in Figs. 5a and 5b, which demonstrates the superiority of our methodology.

Figure 4: Accuracy vs. iteration of ADA-CHIO method

Figure 5: ADA-FBIO and ADA-CHIO method

The effectiveness of our proposed techniques has been assessed with evaluation metrics and error metrics. The evaluation metrics are accuracy, F1-score, precision and recall. Furthermore, the performance error metrics are RMSE, MSE and MAPE. Our proposed techniques outperform the state-of-the-art methods in terms of performance metrics, i.e., they achieve the highest values and the lowest error rates for MSE, RMSE and MAPE. The formulas of the performance evaluation metrics and performance error metrics are given in Eqs. (10)-(16) [2].

where the "Actual" variable describes the real data (on which the classifier is trained), whereas the "Predicted" variable is the predicted data. TPT denotes true positives, FPT false positives, FNT false negatives and TNT true negatives.
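As the bodies of Eqs. (10)-(16) are not reproduced here, the sketch below computes the named metrics with their standard definitions using scikit-learn and NumPy; whether the error metrics are computed on predicted labels or predicted probabilities is an assumption about the evaluation setup.

```python
# Standard-definition metric sketch (scikit-learn >= 0.24 / NumPy). Whether the
# error metrics use predicted labels or probabilities is an assumption.
import numpy as np
from sklearn.metrics import (accuracy_score, precision_score, recall_score, f1_score,
                             roc_auc_score, mean_squared_error,
                             mean_absolute_percentage_error)

def evaluate(y_true, y_pred, y_score):
    """y_pred: hard labels; y_score: predicted probability of the 'thief' class."""
    mse = mean_squared_error(y_true, y_score)
    return {
        "accuracy":  accuracy_score(y_true, y_pred),
        "precision": precision_score(y_true, y_pred),
        "recall":    recall_score(y_true, y_pred),
        "f1":        f1_score(y_true, y_pred),
        "roc_auc":   roc_auc_score(y_true, y_score),
        "mse":       mse,
        "rmse":      float(np.sqrt(mse)),
        "mape":      100 * mean_absolute_percentage_error(y_true, y_score),
    }
```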

Figs. 6a and 6b present the values of the performance error metrics and performance metrics. These figures show that our proposed techniques' error values are low compared to the other techniques. In Fig. 6, it is clearly shown that ADA-CHIO and ADA-FBIO have the highest accuracies of 93% and 90%. Furthermore, ADA-CHIO and ADA-FBIO have low MAPE values of 6.7% and 9.4%, respectively. These values show the superiority of our proposed techniques.

Figure 6: Proposed algorithm vs. benchmark algorithms

The lowest MAPE error and the maximum accuracy are obtained by ADA-CHIO and ADA-FBIO. Fig. 6a depicts the accuracy bars of the methods. As indicated in Tab. 3, we also conducted a statistical study of the proposed approaches and the benchmark techniques. In Tab. 3, the general range for accepting a hypothesis is less than 0 and greater than -1. This means that when the statistical test value is greater than -1, the hypothesis is considered correct; values outside this range indicate a false hypothesis. We can see that our proposed model's values are greater than -1, which means our hypothesis is correct.
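A minimal SciPy sketch of the nonparametric comparison tests listed earlier (Mann-Whitney, Wilcoxon, Kruskal, paired Student's t-test) is given below. The per-run accuracy arrays are synthetic stand-in samples for demonstration only, not results from the paper, and the choice of paired vs. unpaired tests is an assumption.

```python
# Sketch of statistical comparison tests (SciPy). The sample arrays below are
# synthetic stand-ins for per-run accuracy scores, not results from the paper.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
acc_proposed = rng.normal(0.90, 0.02, 10)    # replace with real per-run results
acc_benchmark = rng.normal(0.85, 0.02, 10)   # replace with real per-run results

print(stats.mannwhitneyu(acc_proposed, acc_benchmark))   # unpaired rank test
print(stats.wilcoxon(acc_proposed, acc_benchmark))        # paired signed-rank test
print(stats.kruskal(acc_proposed, acc_benchmark))         # Kruskal-Wallis H test
print(stats.ttest_rel(acc_proposed, acc_benchmark))       # paired Student's t-test
```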

Table 3: Statistical analysis of proposed techniques vs. benchmark algorithms

5 Conclusions

In this article, we present two new algorithms, namely Ada-CHIO and Ada-FBIO, to detect energy theft in AMI. The approach is based on the predictability of natural and malicious consumer consumption behavior. In addition to using the AdaBoost anomaly detector, the proposed algorithm relies on distribution transformer meters to detect NTL at the transformer stage, and it uses a base-learner scheme in the training model to distinguish the various distributions in the dataset. We have seen that these features give the algorithm a high level of performance and provide resistance to non-malicious changes in usage patterns and data intrusion attacks. In practice, it is observed that the performance requirements for ETD can differ by region. However, it is concluded that by adding a delay to the detection algorithm, we can adjust the performance to fit various goals. Simulation results show that the proposed algorithms have a high degree of accuracy/precision, i.e., 93% and 90%, and a low performance error rate, i.e., 7% and 10%, on a real dataset.

The proposed model has maximum reliability, lower performance error and more sensitivity, but it does not guarantee that it will self-learn new patterns of power theft. We cannot say how well our fine-tuned approach will manage numerous types of fraud and large data. This research might be expanded to find distinct patterns of power theft methods as a potential future research topic. To make the research more dependable and precise, researchers can collect additional data samples from real-time SG experts. To find different sorts of thief patterns in the data, multiclass models may be used.

Funding Statement: The authors acknowledge the support from the Ministry of Education and the Deanship of Scientific Research, Najran University, Kingdom of Saudi Arabia, under Code Number NU/-/SERC/10/588.

Conflicts of Interest: The authors declare that they have no conflicts of interest to report regarding the present study.
