Author: Varshini Siddavatam
Sri Chaitanya Junior College
August 1, 2021
It is the purpose of this paper to investigate the impact of missing values on commonly encountered data analysis problems. The ability to more effectively identify patterns in socio-demographic longitudinal data is critical in a wide range of social science settings, including academia. Because of the categorical and multidimensional nature of the data, as well as the contamination caused by missing and inconsistent values, it is difficult to perform fundamental analytical operations such as clustering, which groups data based on similarity patterns.

Companies can suffer significant financial losses as a result of inaccurate data. Poor-quality data is frequently cited as the root cause of operational snafus, inaccurate analytics, and poorly thought-out business strategies. Examples of the economic harm that data quality problems can cause include increased costs when products are shipped to the wrong customer addresses, lost sales opportunities as a result of inaccurate or incomplete customer records, and fines for failing to comply with financial or regulatory reporting requirements.

Processes such as data cleansing, also known as data scrubbing, are used to correct data errors and to enhance data sets by adding missing values, more up-to-date information, or additional records. Afterwards, the results are monitored and measured against performance objectives, and any remaining deficiencies in data quality serve as the starting point for the next round of planned improvements. The goal of such a cycle is to ensure that efforts to improve overall data quality continue after individual projects are finished.
A. Background Information
Data quality is the process of assessing data against several factors, such as consistency, accuracy, reliability, completeness, and whether the data is current. Professionals deal with missing and corrupted data regularly in their work. To make data more dependable and usable, it is highly significant to assess data quality and identify data errors. Missing data is comparable to a missing value in an important document or a missing piece of information about a whole unit: when informative data is missing, the required criteria cannot be answered.
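The quality factors listed above can be made concrete with a simple measurement. The sketch below is a minimal, hypothetical illustration (the records and field names are invented for this example) of how completeness, one common data quality dimension, could be computed for each field of a small dataset:

```python
# Minimal sketch: measuring completeness, one common data-quality
# dimension. The records and field names are invented for illustration.
records = [
    {"name": "Asha", "age": 34, "city": "Delhi"},
    {"name": "Ravi", "age": None, "city": "Pune"},
    {"name": None, "age": 29, "city": None},
]

def completeness(records, field):
    """Fraction of records in which `field` is present (not None)."""
    present = sum(1 for r in records if r.get(field) is not None)
    return present / len(records)

for field in ("name", "age", "city"):
    print(f"{field}: {completeness(records, field):.2f}")
```

A field with completeness below some agreed threshold (say, 0.95) would then be flagged for cleansing; similar small functions could score consistency or accuracy against reference values.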
Over the past decade especially, as online data storage has grown constantly, the issue of corrupt data has grown rapidly. People now place much of their personal information on social networking and other online sites, and the majority of working procedures depend on online networks. Based on daily information, the reported rate of missing data is 15% to 20% (nih.gov, 2021). Given this trend, it is highly significant to maintain the quality of data.
B. Thesis Statement
This study focuses on the importance of analyzing data quality in relation to missing and corrupted data. The thesis of the research is that missing and corrupted data can be managed through effective solutions that improve overall data quality. In addition, improving the storage capacity of the data collection process can protect valuable data from being corrupted or lost.
With better knowledge and skills, data protection processes can be used far more effectively to secure the important documents that are uploaded to various online sites. Maintaining good-quality data that is not easy to imitate or steal also serves to prevent data corruption. Analysis of studies on missing data indicates that cases of missing data have increased considerably worldwide. If prevention efforts receive proper governmental support, the process will be better understood by everyone.
A. Support Paragraph 1
An inability to handle missing and corrupted data can have a negative effect on an individual work process.
To handle missing values, operators can calculate a representative value for the column, such as the mean of a cluster of similar records, and place that number in the empty spot. As opined by Hao et al. (2018), guarding against sudden power outages can save data from being corrupted. System crashes are another frequent cause of data loss. As stated by Gudivada et al. (2017), when a PC hard disk fills with junk files, data corruption becomes more likely. Restoring previous versions from main storage can help undo data corruption, and updating the computer's operating system regularly helps operators safeguard their important data. As observed by Azeroual & Schöpfel (2019), the DISM tool is an effective way for administrators and developers in computer science to modify and repair system images. For recovering corrupted files, the hard-disk check command is recognized as another key tool for repairing damaged data.
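The column-value substitution described above can be sketched as simple mean imputation: each missing entry in a column is replaced by the mean of the observed values. This is only an illustrative sketch, not the method of any of the cited authors, and the data is invented:

```python
# Sketch of mean imputation: replace each missing entry (None) in a
# column with the mean of the observed values. The data is invented.
def impute_mean(column):
    """Return a copy of `column` with None values replaced by the mean."""
    observed = [v for v in column if v is not None]
    mean = sum(observed) / len(observed)
    return [mean if v is None else v for v in column]

ages = [25, None, 31, 28, None]
print(impute_mean(ages))  # [25, 28.0, 31, 28, 28.0]
```

Mean imputation is the simplest option; more careful methods (such as imputing from a cluster of similar records, as suggested above) reduce the distortion that a single global mean can introduce.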
According to reports, frauds based on internet stock earn millions per year. Of the total amount of missing data, the majority cannot be repaired. As stated by Owusu et al. (2019), missing data is especially concerning for older people, who often have very little knowledge of technologies and online procedures. Since most work processes are now conducted through online networking sites, securing valuable data from the eyes of hackers is a real risk. Older people are easily manipulated by cyber fraudsters' phishing calls and end up sharing their personal details. Equipped with advanced, modern technology, hackers continue to steal others' important data easily, and any data that is hacked, lost, or corrupted can be used in criminal activities.
Securing various types of activities requires a proper approach to protecting data. Missing or corrupted data not only affects the work procedure within an individual organization but can also harm individuals through their personal information. As proposed by Morganstein & Ursano (2020), working far from the office creates difficulties for employees under the data security provider system, and this is also identified as a major issue regarding corrupted data. In many cases, employees who lack proper knowledge and skills remain incapable of protecting data from being lost. Data is always important and significant for establishing facts at an initial stage. The principle of "missing data methods" is not simply to substitute a missing value; such methods merge the available information from the observed data with explicit assumptions.
The loss of vital data or information also affects the research process and creates obstacles for researchers. In corporate and private working sectors especially, where entire work procedures happen through internet-based networking sites, the majority of data-loss cases are found. As per the view of Pan & Chen (2018), operating online sites delivers new opportunities for cyber fraudsters and hackers to commit offences. Not all staff in a corporate sector are capable of handling data security processes, so when data goes missing they face many issues in their work. This negatively affects the ability to carry the work process forward smoothly, and consequently it can increase trust issues.
By adopting several strategic plans, administrators and developers in computer science can recover missing and corrupted data. In addition, acquiring proper knowledge and skills regarding data protection can reduce the effects of missing data. Corrupted data not only disrupts work processes in the corporate world but also harms customer trust. Since most work processes are now conducted through online networking sites, securing valuable data from the eyes of hackers is a real risk. In this era, a lack of proper knowledge of data security leads to serious issues, especially in working systems.
B. Support Paragraph 2
Effective data quality analysis can recover missing and corrupted data and has a positive effect on an individual work process.
Because poor-quality data often limits work processes, it is important to adopt data quality analysis in order to preserve work performance. As stated by Wahyudi et al. (2018), developers can maintain data quality to create a more effective operating system. As per the view of Uthayakumar et al. (2018), top-quality databases ease migration considerations for an individual work process. In this segment, estimating and implementing a data recovery warehouse can meet the needs of the work culture. As opined by Cappiello et al. (2018), awareness of quality management helps in building an effective work process. Using good-quality data improves the decision-making process and makes the work more authentic.
In any work procedure, the data collection method and the collected data are equally important and have a vital role in the overall process. Protecting data requires proper skill in handling the information and placing it in secure storage. As opined by Triguero et al. (2019), adopting an adequate data policy can also help protect valuable data over the long term. When big data is transferred, corruption and missing data commonly occur. Because big data is heavy to load and transfer in a short time, a proper framework is required to support the process. In addition, attention to how data storage is built can secure informative data more protectively.
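One standard way to detect corruption when data is transferred, in line with the framework discussed above, is to compare checksums computed before and after the transfer. The sketch below uses Python's standard hashlib; the payload is an invented stand-in for real transferred data:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA-256 hex digest of a byte payload."""
    return hashlib.sha256(data).hexdigest()

# The sender computes a digest before transfer; the receiver recomputes
# it after transfer and compares. The payload here is invented.
payload = b"survey responses, batch 42"
digest_before = sha256_of(payload)

received = payload                # intact transfer
corrupted = payload[:-1] + b"?"   # simulated single-byte corruption

print(sha256_of(received) == digest_before)   # True: data intact
print(sha256_of(corrupted) == digest_before)  # False: corruption detected
```

A checksum mismatch does not repair the data, but it flags the corrupted batch so it can be retransmitted or restored from backup rather than silently entering the analysis.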
This approach especially helps employees working in corporate organizations. Holding data properly is a significant requirement in a workplace, as it relates to the success of the organization. According to Benzeval et al. (2020), the work process in any organization proceeds on the basis of data, which also makes it possible to predict whether a profit is achievable. Missing data and data corruption often occur when the system is full and overloaded; in this scenario, computer knowledge can prevent large losses and make the data a little easier to handle. Good-quality databases can retain important data for long-term use. Therefore, continual data quality analysis can keep the entire process active in protecting data from corruption.
The value of data can be preserved by adopting effective technologies that deliver extra security so that data cannot be lost. However, data analysis needs to be efficient enough that any kind of error can be noticed and the associated risks prevented. In the words of Broeders et al. (2017), focusing on data quality secures the information and reduces risk factors. At the current pace of change, it is highly crucial to introduce new strategies and technologies in the workplace, bringing innovation while maintaining data quality. Understanding the requirements of data analysis also helps in devising a proper strategy for managing corrupted data.
Top-quality data can mitigate a lack of trust and provide reliable resources for finishing any work segment. Work in any organization proceeds on the basis of data analysis, which also makes it possible to predict whether a profit is achievable. When big data is transferred, corruption and missing data commonly occur; adopting advanced, modern security systems can handle large datasets and secure them from corruption. To fulfill the criteria discussed above, it is essential to follow a proper data analysis method so that potential risk factors can be highlighted or marked and then fixed.
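Highlighting risk factors for fixing, as described above, can be sketched as rule-based validation: each record is checked against simple quality rules, and violations are flagged for review. The rules, field names, and data below are invented for illustration only:

```python
# Sketch of rule-based validation: flag records that break simple
# quality rules so they can be reviewed and fixed. Rules and data
# are invented for illustration.
rules = {
    "age": lambda v: v is not None and 0 <= v <= 120,
    "email": lambda v: v is not None and "@" in v,
}

records = [
    {"age": 34, "email": "a@example.com"},
    {"age": -5, "email": "b@example.com"},   # invalid age
    {"age": 29, "email": None},              # missing email
]

def validate(record):
    """Return the names of fields in `record` that violate a rule."""
    return [field for field, ok in rules.items() if not ok(record.get(field))]

flagged = [(i, validate(r)) for i, r in enumerate(records) if validate(r)]
print(flagged)  # [(1, ['age']), (2, ['email'])]
```

Running such checks at collection time, before the data enters storage or analysis, marks problem records early rather than letting errors surface downstream.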
Assessing data quality can indicate whether a work process will be beneficial. The entire structure of the data analysis method needs to be more active in recovering missing and corrupted data. Maintaining proper rules and regulations can also help control data collection methods and avoid data errors. Equipped with advanced, modern technology, hackers continue to steal others' important data easily; to prevent all such offences, organizations need to adopt a more effective and active data security system for the long term. In many cases, a lack of proper knowledge of data security leads to serious issues, especially in working systems. Maintaining good-quality data that is not easy to imitate or steal also serves to prevent data corruption, and computer knowledge can reduce large losses and make data a little easier to handle.
From the entire study, it can be concluded that monitoring improvement results makes it possible to manage data quality. High-quality data is always helpful for replacing inaccurate data with good and valid information. With better knowledge and skills, data protection processes can be used far more effectively to secure the important documents that are uploaded to various online sites. To lead the work process in any organization, a proper framework is required to analyze the collected data and identify potential risks or errors so they can be prevented from a very early stage.
Azeroual, O., & Schöpfel, J. (2019). Quality issues of CRIS data: An exploratory investigation with universities from twelve countries. Publications, 7(1), 14. Retrieved From: https://www.mdpi.com/416282
Benzeval, M., Bollinger, C., Burton, J., Couper, M. P., Crossley, T. F., & Jäckle, A. (2020). Integrated data: research potential and data quality. Understanding Society Working Paper Series, (2020-02). Retrieved From: https://www.understandingsociety.ac.uk/sites/default/files/downloads/working-papers/2020-02.pdf
Broeders, D., Schrijvers, E., van der Sloot, B., van Brakel, R., de Hoog, J., & Ballin, E. H. (2017). Big Data and security policies: Towards a framework for regulating the phases of analytics and use of Big Data. Computer Law & Security Review, 33(3), 309-323. Retrieved From: https://www.sciencedirect.com/science/article/pii/S0267364917300675
Cappiello, C., Samá, W., & Vitali, M. (2018, June). Quality awareness for a successful big data exploitation. In Proceedings of the 22nd International Database Engineering & Applications Symposium (pp. 37-44). Retrieved From: https://dl.acm.org/doi/abs/10.1145/3216122.3216124
Gudivada, V., Apon, A., & Ding, J. (2017). Data quality considerations for big data and machine learning: Going beyond data cleaning and transformations. International Journal on Advances in Software, 10(1), 1-20. Retrieved From: https://www.researchgate.net/profile/Junhua-Ding/publication/318432363_Data_Quality_Considerations_for_Big_Data_and_Machine_Learning_Going_Beyond_Data_Cleaning_and_Transformations/links/59ded28b0f7e9bcfab244bdf/Data-Quality-Considerations-for-Big-Data-and-Machine-Learning-Going-Beyond-Data-Cleaning-and-Transformations.pdf
Hao, Y., Wang, M., Chow, J. H., Farantatos, E., & Patel, M. (2018). Modelless data quality improvement of streaming synchrophasor measurements by exploiting the low-rank Hankel structure. IEEE Transactions on Power Systems, 33(6), 6966-6977. Retrieved From: https://ieeexplore.ieee.org/abstract/document/8395403/
Morganstein, J. C., & Ursano, R. J. (2020). Ecological disasters and mental health: causes, consequences, and interventions. Frontiers in psychiatry, 11, 1. Retrieved From: https://www.frontiersin.org/articles/10.3389/fpsyt.2020.00001/full
nih.gov (2021). The prevention and handling of the missing data. Retrieved From: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3668100/ [Accessed 27 July 2021]
Owusu, E. K., Chan, A. P., & Shan, M. (2019). Causal factors of corruption in construction project management: An overview. Science and engineering ethics, 25(1), 1-31. Retrieved From: https://link.springer.com/content/pdf/10.1007/s11948-017-0002-4.pdf
Pan, J., & Chen, K. (2018). Concealing corruption: How Chinese officials distort upward reporting of online grievances. American Political Science Review, 112(3), 602-620. Retrieved From: https://www.cambridge.org/core/journals/american-political-science-review/article/concealing-corruption-how-chinese-officials-distort-upward-reporting-of-online-grievances/43D20A0E5F63498BB730537B7012E47B
Triguero, I., García‐Gil, D., Maillo, J., Luengo, J., García, S., & Herrera, F. (2019). Transforming big data into smart data: An insight on the use of the k‐nearest neighbors algorithm to obtain quality data. Wiley Interdisciplinary Reviews: Data Mining and Knowledge Discovery, 9(2), e1289. Retrieved From: https://wires.onlinelibrary.wiley.com/doi/abs/10.1002/widm.1289
Uthayakumar, J., Vengattaraman, T., & Dhavachelvan, P. (2018). A survey on data compression techniques: From the perspective of data quality, coding schemes, data type and applications. Journal of King Saud University-Computer and Information Sciences. Retrieved From: https://www.sciencedirect.com/science/article/pii/S1319157818301101
Wahyudi, A., Kuk, G., & Janssen, M. (2018). A process pattern model for tackling and improving big data quality. Information Systems Frontiers, 20(3), 457-469. Retrieved From: https://link.springer.com/article/10.1007/s10796-017-9822-7