Implementation of Test Automation with Selenium WebDriver
Aneesha Gudavalli¹ and G. JayaLakshmi²
¹M.Tech Scholar, Department of Information Technology, V R Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India
Email: gudavallianeesha@gmail.com
²Assistant Professor, Department of Information Technology, V R Siddhartha Engineering College, Vijayawada, Andhra Pradesh, India
Email: jaya1123@vrsiddahrtha.ac.in
Abstract—A wide variety of methods are used to collect data that matters. Among these data collection methods, web scraping is one of the most reliable and efficient. Selenium, a popular test automation framework, can be used to scrape web pages in a variety of ways. In this paper, we demonstrate web page scraping using Python, a popular backend programming language, together with the Selenium test automation framework. Information can also be scraped from static web pages. This paper therefore examines the major advantages and challenges of web scraping in building web applications. The framework is fully functional and can be used by both novice and advanced users to automate the testing of web sites.
Index Terms— HTML, Python, JavaScript, WebDriver.
I. INTRODUCTION
Information is one of the most valuable resources on the internet. To function properly, several web
services and applications require large amounts of data. Web search engines (Google, DuckDuckGo), product
price and feature comparison sites (versus.com, pricealarm.net), and internal tools used by several companies for
market and competitor analysis are examples of such applications. Two applications will be created
to better understand this process.
One of the applications uses web scraping as a back-end service and compares three popular online electronics
stores in real time. Web scraping also provides the data for the second application, a simplified version
of Google's web search engine based on daily web scraping. The principle of web scraping is to extract a large
amount of data and store it in a local environment [1]. In the early days, the only way to extract data from websites
was by copy-pasting what one saw on the website. Web scraping is a popular technique because it allows innovative
startups to quickly acquire large amounts of data. The most typical examples of web scraping are price
comparison and review websites. Big firms also use this technique to rank and index web pages.
In order to scrape a website, you must crawl and parse it [1]. Depending on the developer's implementation, a
crawler visits a URL, downloads the page content, and follows the links within that content. Data is then
extracted by parsing the downloaded content. To implement parsers and crawlers, you must understand the basic
components of web applications as well as the website rendering process.
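The crawl-and-parse loop described above can be sketched in plain Python. This is a minimal illustration, not the paper's implementation: it uses the standard-library `html.parser` module instead of Selenium, and it substitutes a hypothetical in-memory "site" (a dict mapping URLs to HTML strings) for live HTTP downloads, so the fetch step is just a dictionary lookup.

```python
from html.parser import HTMLParser

# Hypothetical two-page site standing in for real downloaded content.
SITE = {
    "http://example.com/": '<a href="http://example.com/a">A</a><p>home</p>',
    "http://example.com/a": "<p>page a</p>",
}

class LinkParser(HTMLParser):
    """Parser: collects links (for the crawler) and text (the scraped data)."""
    def __init__(self):
        super().__init__()
        self.links = []
        self.text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href":
                    self.links.append(value)

    def handle_data(self, data):
        if data.strip():
            self.text.append(data.strip())

def crawl(start_url, fetch):
    """Visit each URL, parse its content, and follow links within it."""
    seen, queue, scraped = set(), [start_url], {}
    while queue:
        url = queue.pop(0)
        if url in seen:
            continue
        seen.add(url)
        parser = LinkParser()
        parser.feed(fetch(url))      # "download" the page content
        scraped[url] = parser.text   # extract data by parsing it
        queue.extend(parser.links)   # follow links found in the content
    return scraped

data = crawl("http://example.com/", SITE.get)
```

In a real deployment the `fetch` callable would perform an HTTP request or drive a Selenium WebDriver session (so that JavaScript-rendered content is available before parsing); the crawl loop itself is unchanged.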
Grenze ID: 01.GIJET.8.1.8_1
© Grenze Scientific Society, 2022
Grenze International Journal of Engineering and Technology, Jan Issue