Authentication

Efficient code often has the drawback of being hard to understand at first. It tends to be dense, with a lot happening on a single line.


That's why teachers often teach what could be described as 'the long way round' when designing lessons on new topics.


With all that in mind, don't be mad at us when we say that there is an easier way to create a login system than by using sessions. You've spent the past few days getting a really thorough grounding in what's going on behind the scenes, which was the whole point. No, really. We promise.


Replit Authentication

Here at Replit, we know that you will probably be using authentication a lot. So we've baked the feature right in.


👉 Run your code, then head over to the files pane on the left and scroll until you see authentication. Then, erm, turn it on. That's it.




Now you will see that your repl uses the default Replit login page.




I can also access a bunch of information about the user, which Replit passes along in request headers listed in the authentication panel.


👉 To do this, I'm going to import request from Flask, then use username = request.headers["X-Replit-User-Name"] to assign the username to a variable. I got the X-Replit-User-Name header name from the authentication panel.


Here's the full code:


from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def index():
  # Replit adds this header to requests from logged-in users
  username = request.headers["X-Replit-User-Name"]
  return f"Hello {username}"

app.run(host='0.0.0.0', port=81)
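One thing to watch out for: if a visitor hasn't logged in yet, the X-Replit-User-Name header won't be present, and request.headers["..."] raises a KeyError. Here's a minimal sketch of a more defensive version (the greeting strings are just examples). It uses Flask's built-in test client to simulate a request with and without the header, so you can see both cases without opening a browser:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route('/')
def index():
  # .get() returns None instead of raising KeyError
  # when the header is missing (visitor not logged in)
  username = request.headers.get("X-Replit-User-Name")
  if username:
    return f"Hello {username}"
  return "Hello stranger, please log in"

# Simulate requests with Flask's test client to check both cases
client = app.test_client()
print(client.get('/', headers={"X-Replit-User-Name": "Ada"}).get_data(as_text=True))
print(client.get('/').get_data(as_text=True))
```

On Replit itself you'd drop the test-client lines and keep app.run(host='0.0.0.0', port=81) at the bottom as before.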

