
Parameters, Call the Subroutine, Adding More Arguments

 

Parameters


In a subroutine, the () hold the parameters: the pieces of information we pass to the code. (You'll also hear the word 'argument'; strictly speaking, the parameter is the name in the definition and the argument is the actual value you pass in.) Parameters can be variable names that are made up for the first time inside the ().


EXAMPLE :

def whichCake(ingredient):
  if ingredient == "chocolate":
    print("Mmm, chocolate cake is amazing")
  elif ingredient == "broccoli":
    print("You what mate?!")
  else: 
    print("Yeah, that's great I suppose...")


How do we call the subroutine?


We call it in the same way as before. However, instead of leaving the () empty, we put the value we want to pass inside them. That value gets copied into the parameter (ingredient in this example) when the subroutine runs.
 



EXAMPLE :

def whichCake("chocolate"):
  if ingredient == "chocolate":
    print("Mmm, chocolate cake is amazing")
  elif ingredient == "broccoli":
    print("You what mate?!")
  else: 
    print("Yeah, that's great I suppose...") 

Adding More Arguments



We can have as many arguments as we want, separated by commas. 


EXAMPLE :

def whichCake(ingredient, base, coating):
  if ingredient == "chocolate":
    print("Mmm, chocolate cake is amazing")
  elif ingredient == "broccoli":
    print("You what mate?!")
  else: 
    print("Yeah, that's great I suppose...")
  print("So you want a", ingredient, "cake on a", base, "base with", coating, "on top?")

whichCake("carrot", "biscuit", "icing")


