Sunday 26 October 2014

Proofs go Poof

This week's topics included the conclusion of proofs and the beginning of algorithms.
Proof by cases, the last proof method we learnt, involves breaking the problem down into different cases or situations and then solving each one, ultimately bringing the problem full circle by joining up the conclusions. We wrapped up proofs with a summary of introduction and elimination rules for the basic proof conditions.
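As a little sketch of how the cases join back up (my own toy example, not one from lecture): to show that n^2 + n is always even, split on the parity of n.

```latex
\textbf{Claim.} For all $n \in \mathbb{N}$, $n^2 + n$ is even.

\textbf{Proof (by cases).}
\textbf{Case 1:} $n$ is even, say $n = 2k$. Then
$n^2 + n = 4k^2 + 2k = 2(2k^2 + k)$, which is even.
\textbf{Case 2:} $n$ is odd, say $n = 2k + 1$. Then
$n^2 + n = (2k+1)^2 + (2k+1) = 4k^2 + 6k + 2 = 2(2k^2 + 3k + 1)$,
which is even.
Every $n$ falls into one of the two cases, and both cases reach the same
conclusion, so the claim holds for all $n$. $\qed$
```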

Then we started on the more...computer science side of things: Chapter 4, "Algorithm Analysis and Asymptotic Notation", which is basically about classifying algorithms by their big-O behaviour, that is, how efficiently an algorithm (a set of problem-solving steps) works. We started off with bubble sort and merge sort, comparing their running times on the same input data. The point to note is that in computer science, running time isn't actually about the seconds a program takes to finish its job. Rather, it is about the number of steps the program has to run, and how that number changes with the algorithm's actual code and the kind of input it has to work with. This makes the measure independent of the hardware the program runs on.
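Here is a rough sketch of that step-counting idea (my own toy code, not the course's: I count element comparisons rather than wall-clock seconds, so the numbers don't depend on the machine):

```python
# Toy step counters for bubble sort and merge sort: we count element
# comparisons instead of timing, so the numbers are hardware-independent.

def bubble_sort_steps(a):
    a = list(a)
    steps = 0
    for i in range(len(a)):
        for j in range(len(a) - 1 - i):
            steps += 1                      # one comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
    return steps

def merge_sort_steps(a):
    if len(a) <= 1:
        return list(a), 0
    mid = len(a) // 2
    left, s1 = merge_sort_steps(a[:mid])
    right, s2 = merge_sort_steps(a[mid:])
    merged, steps = [], s1 + s2
    i = j = 0
    while i < len(left) and j < len(right):
        steps += 1                          # one comparison
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged += left[i:] + right[j:]          # leftovers need no comparisons
    return merged, steps

data = list(range(1000, 0, -1))             # same input for both sorts
print(bubble_sort_steps(data))              # grows quadratically with n
print(merge_sort_steps(data)[1])            # grows roughly like n log n
```

On the same reversed list, bubble sort's count blows up much faster than merge sort's as the input grows, which is exactly the difference the chapter is about.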

Another thing to note is that constant factors do not matter in classifying run time: 2n and 2000n both count as the same linear running time. Furthermore, when it comes to large data sets, the lower-order terms become irrelevant too. Something like n^2 + 2n + 3 is pretty much the same as n^2, because the 2n + 3 is tiny compared to the n^2 and so has little to no effect on the running time.
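A quick numeric check of that (my own little illustration): the ratio between the full expression and its leading term heads to 1 as n grows.

```python
# The lower-order terms of n^2 + 2n + 3 stop mattering as n grows:
# the ratio (n^2 + 2n + 3) / n^2 approaches 1.
for n in [10, 1000, 100000]:
    exact = n**2 + 2 * n + 3
    print(n, exact / n**2)
```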

The asymptotic notation for a running time is O(f(n)), which describes only its growth rate, whereas the exact form is the full function f(n) in its entirety. This brought us to the end of the class, with a brief look at worst-, average-, and best-case scenarios for an algorithm: the worst case asks under what input conditions the algorithm has its longest running time, and the best case under what conditions it has its shortest.
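To see how the same algorithm can land in different cases depending on its input, here is a sketch (again my own code, not from lecture) of bubble sort with an early exit: already-sorted input is its best case, reverse-sorted input its worst.

```python
# Bubble sort with an early exit: on already-sorted input (best case) a
# single pass of n-1 comparisons suffices; on reverse-sorted input
# (worst case) it performs the full n(n-1)/2 comparisons.

def bubble_steps(a):
    a = list(a)
    steps = 0
    for i in range(len(a)):
        swapped = False
        for j in range(len(a) - 1 - i):
            steps += 1                      # one comparison
            if a[j] > a[j + 1]:
                a[j], a[j + 1] = a[j + 1], a[j]
                swapped = True
        if not swapped:                     # no swaps: list already sorted
            break
    return steps

print(bubble_steps(range(100)))             # best case: 99 comparisons
print(bubble_steps(range(100, 0, -1)))      # worst case: 4950 comparisons
```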

Saturday 18 October 2014

Marks and Proofs

Getting our tests back so quickly came as a bit of a shock this week. My mark gave me a bigger one. Messing up on a topic you thought you understood (which in my case was the order of quantifiers and how a statement's meaning changes depending on that order) is a bit depressing.

Sad test marks aside, this week's shorter-than-usual lecture covered actually doing proofs rather than just their structures. We started off with proofs about non-Boolean functions and the relatively new concept of the 'floor' function. I understood the proofs when they were done in class, but my problem is actually doing them by myself. Understanding something that's already there is much easier than creating something based on the understanding you already have of it. So the "takeaway" slides at the end of each section of the lecture helped me get a better viewpoint on things, such as working with expressions to make them look more like the ones in the question itself.
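For my own reference, the floor facts the proofs lean on (this is the standard definition, written out in my own words):

```latex
\textbf{Definition.} For $x \in \mathbb{R}$, $\lfloor x \rfloor$ is the
unique integer satisfying
\[
  \lfloor x \rfloor \le x < \lfloor x \rfloor + 1 .
\]
\textbf{Handy consequence.} If $n \in \mathbb{Z}$ and $n \le x$, then
$n \le \lfloor x \rfloor$: an integer that is at most $x$ can be at most
the largest such integer.
```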

Then came disproving a statement, which is basically the same as proving its negation. What I found a bit tricky here was visualizing the examples, because not all functions or statements are as straightforward or easy to visualize as the ones we do in lecture.
Lastly, we did epsilon-delta proofs of limits. Again, this is something we're doing in calculus too, so it's just a tiny bit easier to understand. But the backward search for finding the delta is something that I still have to work on.
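As a reminder to myself of how the backward search feeds the forward proof (my own toy example, not the one from lecture):

```latex
\textbf{Claim.} $\lim_{x \to 2} 3x = 6$, i.e.
$\forall \epsilon > 0,\ \exists \delta > 0,\ \forall x \in \mathbb{R},\
 |x - 2| < \delta \Rightarrow |3x - 6| < \epsilon$.

\textbf{Backward search.} We want $|3x - 6| < \epsilon$, and
$|3x - 6| = 3|x - 2|$, so it is enough to force
$|x - 2| < \epsilon / 3$. That tells us what to pick:
$\delta = \epsilon / 3$.

\textbf{Forward proof.} Let $\epsilon > 0$ and take
$\delta = \epsilon / 3 > 0$. If $|x - 2| < \delta$, then
$|3x - 6| = 3|x - 2| < 3\delta = \epsilon$. $\qed$
```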

All in all I think proofs are something that give a lot of people trouble but hopefully we all will pull through this and survive. 

Sunday 12 October 2014

Tried and Tested


One test done! And it wasn't even that bad! Looking at the tests from past years had caused me a bit of worry, seeing as they had things we had not covered in class yet, but it turns out there was no need to worry at all. Everything on the test related to topics I was sure would be on it, and there were no extra-complicated or trick questions (unless there were and I didn't catch them).

In the lecture that we (sadly) had after the test, we talked about four ways of doing a proof:
  • proof using contrapositive - should be used when the contrapositive is easier to prove than the statement itself.
  • proof using contradiction - assume the statement is false and show that this assumption leads to something impossible.
  • proof for existence - find just one example that proves the statement.
  • proof about a sequence - break the sequence down into parts based on the quantifiers in the statement.
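A tiny example of the contradiction method (my own, not from lecture), since "prove the opposite wrong" is easy to misremember: you assume the statement is false and derive an impossibility.

```latex
\textbf{Claim.} There is no largest natural number.

\textbf{Proof (by contradiction).} Assume the claim is false: some
$N \in \mathbb{N}$ is the largest natural number. But
$N + 1 \in \mathbb{N}$ and $N + 1 > N$, contradicting the maximality
of $N$. So the assumption is false, and the claim holds. $\qed$
```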

One thing to remember (which I think I will forget) is to write those comments explaining what is going on in your proof. It's pretty much the same as commenting your code.

Proofs are something that I take time to understand completely. The tutorial proofs and quiz helped a lot to understand the structure by breaking it down, but the process is still not something that I am fully confident about. Hopefully, next week when we do more proofs, it will all become clearer. 

Friday 3 October 2014

Problem solving episode

So at the end of the last lecture we worked on figuring out whether folding a piece of paper over and over again in the same way produces a mathematically predictable crease pattern or not. This is how I broke the problem down according to Pólya's method of problem solving.

1. We know that we must take a strip of paper and fold its left end over to its right end several times, getting a creased pattern on the strip when it is opened up. Each crease thus created points either vertex up or vertex down. The goal of the problem is to determine whether one can logically predict which way the creases will face (up or down), regardless of the number of times the paper is folded over.

2. The plan is to first fold the paper different numbers of times and note the pattern of creases created for each number of folds. Then identify the similarities and differences across the series of patterns in order to predict the direction of the creases after each fold.

3. Folding the paper once gives:      v
Folding it twice gives:               ^ v v
Folding it thrice gives:              ^ ^ v v ^ v v
Folding it four times gives:          ^ ^ v ^ ^ v v v ^ ^ v v ^ v v
and so on.
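The patterns above can even be generated mechanically (my own sketch, checked against the folds listed above): each new fold puts a new v-crease in the middle, with the previous pattern reversed-and-flipped on its left and the previous pattern unchanged on its right.

```python
# Generate the crease pattern after n folds. Each fold adds a new
# v-crease in the middle; to its left sits the previous pattern
# reversed with ^ and v swapped, to its right the previous pattern
# unchanged.

def creases(n):
    flip = {"^": "v", "v": "^"}
    s = ""
    for _ in range(n):
        mirrored = "".join(flip[c] for c in reversed(s))
        s = mirrored + "v" + s
    return s

for n in range(1, 5):
    print(n, " ".join(creases(n)))
```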

4. Looking at the pattern created after each fold, we can see how it follows from the previous one. For example, after the third fold, the paper has all the same creases as after the second fold, and then some more. To make this clearer, we can line the levels up so that each crease sits directly above itself in the next level:

Fold 1:               v
Fold 2:       ^       v       v
Fold 3:   ^   ^   v   v   ^   v   v
Fold 4: ^ ^ v ^ ^ v v v ^ ^ v v ^ v v
After putting it like this, one can see that once a crease is made by a fold, in the next level of folding it gets a new vertex-up crease immediately before it and a new vertex-down crease immediately after it. Also, only a new crease (i.e. a crease that did not exist at the previous level of folding) is preceded and followed in that manner. An old crease (one created before the previous level of folding) just stays the same, with no additional creases around it except those created as a result of the other new creases.
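That observation can be checked mechanically too (a quick sketch of my own, reusing the mirror-then-centre generation rule from step 3): every crease that is new at level n should have a ^ immediately before it and a v immediately after it at level n + 1.

```python
# Check the observation: a crease that is NEW at level n sits at an even
# index there; at level n+1 the same crease sits at index 2*i + 1, and
# the claim is that its new left neighbour is ^ and its new right
# neighbour is v.

def creases(n):
    flip = {"^": "v", "v": "^"}
    s = ""
    for _ in range(n):
        s = "".join(flip[c] for c in reversed(s)) + "v" + s
    return s

def rule_holds(n):
    cur, nxt = creases(n), creases(n + 1)
    for i in range(0, len(cur), 2):        # new creases at level n
        j = 2 * i + 1                      # same crease at level n+1
        if nxt[j - 1] != "^" or nxt[j + 1] != "v":
            return False
    return True

print(all(rule_holds(n) for n in range(1, 10)))
```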

5. You will get stuck solving this problem if you jump straight into it with no plan of action. It takes a while to wrap your brain around the fact that the creases are not in fact random and do depend on the way you fold them, but once you try folding the paper a few different numbers of times, you will see the pattern. Writing out what you see really helps in understanding what's happening.