CrickView




Introduction:

I decided to design a platform that offers a more practical and aesthetically pleasing way to analyse a player's performance data, in response to the difficulty of pulling structured information from Wikipedia.

By harnessing the power of data visualisation through graphs, users can now quickly access and understand the key facts and trends in a player's performance. The platform aims to streamline the process of gathering insights by making the information easier to obtain and interpret.

During the development of CrickView, the following major libraries were investigated and utilized:
  • Streamlit
  • Pandas
  • re
  • Requests
  • bs4 (BeautifulSoup)
  • time
  • Plotly
  • urllib
  • NumPy
Flow of Analysis:

    Using BeautifulSoup, I created a smooth pipeline connecting my website to Wikipedia. Within this pipeline I targeted the specific pieces of data vital to the operation of my website, and BeautifulSoup's parsing features let me retrieve them reliably.
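The parsing step can be sketched as follows. The inline HTML snippet is an illustrative stand-in for what `requests.get(url).text` would return from a Wikipedia player page, and `parse_infobox` is a hypothetical helper name; the real pipeline targets different fields.

```python
import re
from bs4 import BeautifulSoup

# Stand-in for the HTML a live Wikipedia request would return.
html = """
<table class="infobox">
  <tr><th>Full name</th><td>Sachin Ramesh Tendulkar</td></tr>
  <tr><th>Batting</th><td>Right-handed</td></tr>
</table>
"""

def parse_infobox(page_html):
    """Extract header/value pairs from a Wikipedia-style infobox table."""
    soup = BeautifulSoup(page_html, "html.parser")
    info = {}
    for row in soup.select("table.infobox tr"):
        header, value = row.find("th"), row.find("td")
        if header and value:
            # Collapse stray whitespace, the kind of cleanup re is used for.
            key = re.sub(r"\s+", " ", header.get_text(strip=True))
            info[key] = value.get_text(strip=True)
    return info

print(parse_infobox(html))
```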

    I took the deliberate decision to analyse and display the data directly on my website rather than storing it. This choice means longer processing times on each request, but it avoids the effort of maintaining a data store. More importantly, it prioritises real-time accessibility: by dynamically presenting the extracted data, the site ensures users can see and engage with up-to-date information right away. The trade-off between storage and processing time ultimately made for a fresher, more engaging user experience.

    Reasons for using the various libraries include:

  • urljoin : I used urljoin to build the full image URL by combining a base URL with a relative URL, so I could fetch the exact image I needed.
  • plotly : I used Plotly to create and display interactive, visually appealing graphs. The library's graphing tools let me demonstrate trends and conclusions drawn from the retrieved data.
  • requests : I relied on the requests library to communicate with external sources, such as fetching responses from URLs. This made it easier to extract data from websites and APIs and to incorporate dynamic, current information on my website.
  • re : I used the re library's regular expressions to quickly extract and manipulate particular patterns in the scraped data, making the handling and processing of textual data more flexible and precise.
  • Streamlit : I chose Streamlit as the framework for displaying and deploying my work. By providing a simple way to build dynamic web apps, Streamlit let me seamlessly present the processed data and user-interface elements.

  • Disclaimer: Please note that not all player photos may appear on the website. This limitation arises from retrieving and showcasing player images via URL links; if an image is unavailable or inaccessible through the provided link, it will not be displayed.
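The image handling described above can be sketched like this. The base URL and helper names (`resolve_image_url`, `pick_image`) are illustrative assumptions; `pick_image` takes the fetch function as a parameter (e.g. `requests.get`) purely so the availability check can be exercised without a network call.

```python
from urllib.parse import urljoin

BASE_URL = "https://en.wikipedia.org"

def resolve_image_url(relative_src):
    """Join a relative <img src> from the page with the site's base URL."""
    return urljoin(BASE_URL, relative_src)

def pick_image(url, fetch):
    """Return the image URL if it is reachable, else None so the UI skips it."""
    try:
        response = fetch(url, timeout=5)
        reachable = response.status_code == 200
    except Exception:
        reachable = False
    return url if reachable else None
```

Returning `None` instead of raising lets the page render normally with the photo simply omitted, which matches the behaviour the disclaimer describes.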

     Links: