About Python

Python is an interpreted, object-oriented, high-level programming language with dynamic semantics. Its high-level built in data structures, combined with dynamic typing and dynamic binding, make it very attractive for Rapid Application Development, as well as for use as a scripting or glue language to connect existing components or services. Python supports modules and packages, thereby encouraging program modularity and code reuse.

About this article

Python’s simple, easy-to-learn syntax can mislead Python developers – especially those who are newer to the language – into missing some of its subtleties and underestimating the power of the language.

With that in mind, this article presents a “top 10” list of somewhat subtle, harder-to-catch mistakes that can bite even the most advanced Python developer in the rear.


(Note: This article is intended for a more advanced audience than Common Mistakes of Python Programmers, which is geared more toward those who are newer to the language.)

Common Mistake #1: Misusing expressions as defaults for function arguments

Python allows you to specify that a function argument is optional by providing a default value for it. While this is a great feature of the language, it can lead to some confusion when the default value is mutable. For example, consider this Python function definition:

>>> def foo(bar=[]):        # bar is optional and defaults to [] if not specified
...    bar.append("baz")    # but this line could be problematic, as we'll see...
...    return bar

A common mistake is to think that the optional argument will be set to the specified default expression each time the function is called without supplying a value for it. In the above code, for example, one might expect that calling foo() repeatedly (i.e., without specifying a bar argument) would always return ["baz"], the assumption being that each time foo() is called without a bar argument, bar is set to [] (i.e., a new empty list).

But let’s look at what actually happens when you do this:

>>> foo()
["baz"]
>>> foo()
["baz", "baz"]
>>> foo()
["baz", "baz", "baz"]

Huh? Why did it keep appending the default value of "baz" to an existing list each time foo() was called, rather than creating a new list each time?

The answer is that the default value for a function argument is only evaluated once, at the time that the function is defined. Thus, the bar argument is initialized to its default (i.e., an empty list) only when foo() is first defined, but then calls to foo() (i.e., without a bar argument specified) will continue to use the same list to which bar was originally initialized.
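You can see that single shared list for yourself by peeking at the function's __defaults__ attribute (func_defaults on older Python 2 releases), which holds the default values evaluated at definition time. After the three calls above:

>>> foo.__defaults__     # one list object, created once at definition time and reused on every call
(['baz', 'baz', 'baz'],)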

FYI, a common workaround for this is as follows:

>>> def foo(bar=None):
...    if bar is None:          # or if not bar:
...        bar = []
...    bar.append("baz")
...    return bar
...
>>> foo()
["baz"]
>>> foo()
["baz"]
>>> foo()
["baz"]

Common Mistake #2: Using class variables incorrectly

Consider the following example:

>>> class A(object):
...     x = 1
...
>>> class B(A):
...     pass
...
>>> class C(A):
...     pass
...
>>> print A.x, B.x, C.x
1 1 1

Makes sense.

>>> B.x = 2
>>> print A.x, B.x, C.x
1 2 1

Yup, again as expected.

>>> A.x = 3
>>> print A.x, B.x, C.x
3 2 3

What the $%#!&?? We only changed A.x. Why did C.x change too?

In Python, class variables are internally handled as dictionaries and follow what is often referred to as Method Resolution Order (MRO). So in the above code, since the attribute x is not found in class C, it will be looked up in its base classes (only A in the above example, although Python supports multiple inheritance). In other words, C doesn’t have its own x attribute, independent of A. Thus, references to C.x are in fact references to A.x.
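One quick way to convince yourself of this, continuing the session above, is to check the class dictionaries directly:

>>> 'x' in C.__dict__    # C has no x of its own...
False
>>> 'x' in A.__dict__    # ...the lookup falls through to A via the MRO
True
>>> C.x = 4              # assigning to C.x finally gives C its own attribute
>>> print A.x, B.x, C.x
3 2 4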

Common Mistake #3: Specifying parameters incorrectly for an exception block

Suppose you have the following code:

>>> try:
...     l = ["a", "b"]
...     int(l[2])
... except ValueError, IndexError:  # To catch both exceptions, right?
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 3, in <module>
IndexError: list index out of range

The problem here is that the except statement does not take a list of exceptions specified in this manner. Rather, in Python 2.x, the syntax except Exception, e is used to bind the exception to the optional second parameter specified (in this case e), in order to make it available for further inspection. As a result, in the above code, the except clause catches only ValueError; the IndexError that actually occurs goes uncaught (hence the traceback), and had a ValueError occurred, it would have been bound to a parameter named IndexError.
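To see exactly what that (Python 2 only) binding does, here is a small illustration; note that inside the handler the name IndexError is no longer the built-in exception class, but the caught ValueError instance:

>>> try:
...     raise ValueError("the wrong value")
... except ValueError, IndexError:   # catches only ValueError; binds it to the name IndexError
...     print IndexError             # prints the ValueError instance, not the built-in class
...
the wrong value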

The proper way to catch multiple exceptions in an except statement is to specify the first parameter as a tuple containing all exceptions to be caught. Also, for maximum portability, use the as keyword, since that syntax is supported by both Python 2 and Python 3:

>>> try:
...     l = ["a", "b"]
...     int(l[2])
... except (ValueError, IndexError) as e:  
...     pass
...
>>>

Common Mistake #4: Misunderstanding Python scope rules

Python scope resolution is based on what is known as the LEGB rule, which is shorthand for Local, Enclosing, Global, Built-in. Seems straightforward enough, right? Well, actually, there are some subtleties to the way this works in Python. Consider the following:

>>> x = 10
>>> def foo():
...     x += 1
...     print x
...
>>> foo()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in foo
UnboundLocalError: local variable 'x' referenced before assignment

What’s the problem?

The above error occurs because, when you make an assignment to a variable in a scope, that variable is automatically considered by Python to be local to that scope and shadows any similarly named variable in any outer scope.

Many are thereby surprised to get an UnboundLocalError in previously working code when it is modified by adding an assignment statement somewhere in the body of a function. (You can read more about this here.)

It is particularly common for this to trip up developers when using lists. Consider the following example:

>>> lst = [1, 2, 3]
>>> def foo1():
...     lst.append(5)   # This works ok...
...
>>> foo1()
>>> lst
[1, 2, 3, 5]

>>> lst = [1, 2, 3]
>>> def foo2():
...     lst += [5]      # ... but this bombs!
...
>>> foo2()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "<stdin>", line 2, in foo
UnboundLocalError: local variable 'lst' referenced before assignment

Huh? Why did foo2 bomb while foo1 ran fine?

The answer is the same as in the prior example, but admittedly more subtle. foo1 is not making an assignment to lst, whereas foo2 is. Remember that lst += [5] is still an assignment to lst (roughly equivalent to lst = lst.__iadd__([5])), so we are attempting to assign a value to lst (therefore presumed by Python to be in the local scope). However, the value we are looking to assign to lst is based on lst itself (again, now presumed to be in the local scope), which has not yet been defined. Boom.
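If the function really is meant to rebind the module-level name, one straightforward fix is the global statement (Python 3 also offers nonlocal for names in an enclosing function scope):

>>> lst = [1, 2, 3]
>>> def foo3():
...     global lst      # tell Python that lst refers to the module-level name
...     lst += [5]
...
>>> foo3()
>>> lst
[1, 2, 3, 5]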

Common Mistake #5: Modifying a list while iterating over it

The problem with the following code should be fairly obvious:

>>> odd = lambda x : bool(x % 2)
>>> numbers = [n for n in range(10)]
>>> for i in range(len(numbers)):
...     if odd(numbers[i]):
...         del numbers[i]  # BAD: Deleting item from a list while iterating over it
...
Traceback (most recent call last):
          File "<stdin>", line 2, in <module>
IndexError: list index out of range

Deleting an item from a list or array while iterating over it is a faux pas well known to any experienced software developer. But while the example above may be fairly obvious, even advanced developers can be unintentionally bitten by this in code that is much more complex.

Fortunately, Python incorporates a number of elegant programming paradigms which, when used properly, can result in significantly simplified and streamlined code. A side benefit of this is that simpler code is less likely to be bitten by the accidental-deletion-of-a-list-item-while-iterating-over-it bug. One such paradigm is that of list comprehensions. Moreover, list comprehensions are particularly useful for avoiding this specific problem, as shown by this alternate implementation of the above code which works perfectly:

>>> odd = lambda x : bool(x % 2)
>>> numbers = [n for n in range(10)]
>>> numbers[:] = [n for n in numbers if not odd(n)]  # ahh, the beauty of it all
>>> numbers
[0, 2, 4, 6, 8]
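Another common idiom that avoids the same bug is to iterate over a copy of the list while mutating the original:

>>> odd = lambda x : bool(x % 2)
>>> numbers = [n for n in range(10)]
>>> for n in numbers[:]:     # numbers[:] is a shallow copy, so deleting from numbers itself is safe
...     if odd(n):
...         numbers.remove(n)
...
>>> numbers
[0, 2, 4, 6, 8]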

Common Mistake #6: Confusing how Python binds variables in closures

Consider the following example:

>>> def create_multipliers():
...     return [lambda x : i * x for i in range(5)]
>>> for multiplier in create_multipliers():
...     print multiplier(2)
...

You might expect the following output:

0
2
4
6
8

But you actually get:

8
8
8
8
8

Surprise!

This happens due to Python’s late binding behavior which says that the values of variables used in closures are looked up at the time the inner function is called. So in the above code, whenever any of the returned functions are called, the value of i is looked up in the surrounding scope at the time it is called (and by then, the loop has completed, so i has already been assigned its final value of 4).

The solution to this is a bit of a hack:

>>> def create_multipliers():
...     return [lambda x, i=i : i * x for i in range(5)]
...
>>> for multiplier in create_multipliers():
...     print multiplier(2)
...
0
2
4
6
8

Voilà! We are taking advantage of default arguments here to generate anonymous functions in order to achieve the desired behavior. Some would call this elegant. Some would call it subtle. Some hate it. But if you’re a Python developer, it’s important to understand in any case.
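If the default-argument trick feels too clever, one alternative sketch that sidesteps the closure entirely is to bake the loop variable in with functools.partial:

>>> from functools import partial
>>> from operator import mul
>>> def create_multipliers():
...     return [partial(mul, i) for i in range(5)]   # each partial captures the current value of i
...
>>> for multiplier in create_multipliers():
...     print multiplier(2)
...
0
2
4
6
8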

Common Mistake #7: Creating circular module dependencies

Let’s say you have two files, a.py and b.py, each of which imports the other, as follows:

In a.py:

import b

def f():
    return b.x
        
print f()

And in b.py:

import a

x = 1

def g():
    print a.f()

First, let’s try importing a.py:

>>> import a
1

Worked just fine. Perhaps that surprises you. After all, we do have a circular import here which presumably should be a problem, shouldn’t it?

The answer is that the mere presence of a circular import is not in and of itself a problem in Python. If a module has already been imported, Python is smart enough not to try to re-import it. However, depending on the point at which each module is attempting to access functions or variables defined in the other, you may indeed run into problems.

So returning to our example, when we imported a.py, it had no problem importing b.py, since b.py does not require anything from a.py to be defined at the time it is imported. The only reference in b.py to a is the call to a.f(). But that call is in g() and nothing in a.py or b.py invokes g(). So life is good.
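The reason the first import succeeds is Python's module cache: once a.py and b.py have been loaded, both live in sys.modules, and any later import statement simply reuses the cached module object instead of executing the file again. For example, in a fresh interpreter:

>>> import sys
>>> import a                  # executes a.py, which imports b.py and then prints f()
1
>>> 'a' in sys.modules and 'b' in sys.modules
True
>>> import b                  # no re-execution; the cached module is simply bound to the name b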

But what happens if we attempt to import b.py (without having previously imported a.py, that is):

>>> import b
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "b.py", line 1, in <module>
    import a
  File "a.py", line 6, in <module>
    print f()
  File "a.py", line 4, in f
    return b.x
AttributeError: 'module' object has no attribute 'x'

Uh-oh. That’s not good! The problem here is that, in the process of importing b.py, it attempts to import a.py, which in turn calls f(), which attempts to access b.x. But b.x has not yet been defined. Hence the AttributeError exception.

At least one solution to this is quite trivial. Simply modify b.py to import a.py within g():

x = 1

def g():
    import a    # This will be evaluated only when g() is called
    print a.f()

Now when we import it, everything is fine:

>>> import b
>>> b.g()
1       # Printed a first time since module 'a' calls 'print f()' at the end
1       # Printed a second time, this one is our call to 'g'

Common Mistake #8: Name clashing with Python Standard Library modules

One of the beauties of Python is the wealth of library modules that it comes with “out of the box”. But as a result, if you’re not consciously avoiding it, it’s not that difficult to run into a name clash between one of your modules and a module of the same name in the standard library that ships with Python (for example, you might have a module named email.py in your code, which would conflict with the standard library module of the same name).

This can lead to gnarly problems, such as importing another library which in turn tries to import the Python Standard Library version of a module but, since you have a module with the same name, mistakenly imports your version instead of the one in the stdlib. This is where bad stuff happens.
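As a sketch of how the clash plays out, suppose (hypothetically) your project keeps its own email.py next to the script you run. Because the script's directory sits at the front of sys.path, every import email in that process, including imports made by third-party code, resolves to your file:

# Hypothetical layout:
#
#   myproject/
#       email.py    # your module, shadowing the standard library 'email' package
#       main.py
#
# In main.py:
import email
print(email.__file__)   # .../myproject/email.py -- not the standard library version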

Care should therefore definitely be exercised to avoid using the same names as those in the Python Standard Library modules. It’s way easier for you to change the name of a module within your package than it is to file a Python Enhancement Proposal (PEP) to request a name change upstream and to try and get that approved.

Common Mistake #9: Failing to address differences between Python 2 and Python 3

Consider the following file foo.py:

import sys

def bar(i):
    if i == 1:
        raise KeyError(1)
    if i == 2:
        raise ValueError(2)

def bad():
    e = None
    try:
        bar(int(sys.argv[1]))
    except KeyError as e:
        print('key error')
    except ValueError as e:
        print('value error')
    print(e)

bad()

On Python 2, this runs fine:

$ python foo.py 1
key error
1
$ python foo.py 2
value error
2

But now let’s give it a whirl on Python 3:

$ python3 foo.py 1
key error
Traceback (most recent call last):
  File "foo.py", line 19, in <module>
    bad()
  File "foo.py", line 17, in bad
    print(e)
UnboundLocalError: local variable 'e' referenced before assignment

What has just happened here? The “problem” is that, in Python 3, the exception object is not accessible beyond the scope of the except block. (The reason for this is that, otherwise, it would keep a reference cycle with the stack frame in memory until the garbage collector runs and purges the references from memory. More technical detail about this is available here).
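A quick way to see this behavior in isolation in a Python 3 session: the name bound with as is unbound as soon as the except block is left (roughly as if a del e ran in a hidden finally clause):

>>> try:
...     raise KeyError(1)
... except KeyError as e:
...     print('caught:', e)
...
caught: 1
>>> e
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'e' is not defined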

One way to avoid this issue is to maintain a reference to the exception object outside the scope of the except block so that it remains accessible. Here’s a version of the previous example that uses this technique, thereby yielding code that is both Python 2 and Python 3 friendly:

import sys

def bar(i):
    if i == 1:
        raise KeyError(1)
    if i == 2:
        raise ValueError(2)

def good():
    exception = None
    try:
        bar(int(sys.argv[1]))
    except KeyError as e:
        exception = e
        print('key error')
    except ValueError as e:
        exception = e
        print('value error')
    print(exception)

good()

Running this on Py3k:

$ python3 foo.py 1
key error
1
$ python3 foo.py 2
value error
2

Yippee!

(Incidentally, our Python Hiring Guide discusses a number of other important differences to be aware of when migrating code from Python 2 to Python 3.)

Common Mistake #10: Misusing the __del__ method

Let’s say you had this in a file called mod.py:

import foo

class Bar(object):
    ...
    def __del__(self):
        foo.cleanup(self.myhandle)

And you then tried to do this from another_mod.py:

import mod
mybar = mod.Bar()

You’d get an ugly AttributeError exception.

Why? Because, as reported here, when the interpreter shuts down, the module’s global variables are all set to None. As a result, in the above example, at the point that __del__ is invoked, the name foo has already been set to None.

A solution would be to use atexit.register() instead. That way, when your program is finished executing (when exiting normally, that is), your registered handlers are kicked off before the interpreter is shut down.

With that understanding, a fix for the above mod.py code might then look something like this:

import foo
import atexit

def cleanup(handle):
    foo.cleanup(handle)


class Bar(object):
    def __init__(self):
        ...
        atexit.register(cleanup, self.myhandle)

This implementation provides a clean and reliable way of calling any needed cleanup functionality upon normal program termination. Obviously, it’s up to foo.cleanup to decide what to do with the object bound to the name self.myhandle, but you get the idea.

Wrap-up

Python is a powerful and flexible language with many mechanisms and paradigms that can greatly improve productivity. As with any software tool or language, though, having a limited understanding or appreciation of its capabilities can sometimes be more of an impediment than a benefit, leaving one in the proverbial state of “knowing enough to be dangerous”.

Familiarizing oneself with the key nuances of Python, such as (but by no means limited to) the issues raised in this article, will help optimize use of the language while avoiding some of its more common pitfalls.

You might also want to check out our Insider’s Guide to Python Interviewing for suggestions on interview questions that can help identify Python experts.

We hope you’ve found the pointers in this article helpful and welcome your feedback.


Everybody has had the experience of not recognising someone they know—changes in pose, illumination and expression all make the task tricky. So it’s not surprising that computer vision systems have similar problems. Indeed, no computer vision system matches human performance despite years of work by computer scientists all over the world.

That’s not to say that face recognition systems are poor. Far from it. The best systems can beat human performance in ideal conditions. But their performance drops dramatically as conditions get worse. So computer scientists would dearly love to develop an algorithm that can take the crown in the most challenging conditions too.

Today, Chaochao Lu and Xiaoou Tang at the Chinese University of Hong Kong say they’ve done just that. These guys have developed a face recognition algorithm called GaussianFace that outperforms humans for the first time.

The new system could finally make human-level face verification available in applications ranging from smart phone and computer game log-ons to security and passport control.


The first task in any programme of automated face verification is to build a decent dataset to test the algorithm with. That requires images of a wide variety of faces with complex variations in pose, lighting and expression as well as race, ethnicity, age and gender. Then there is clothing, hair styles, make up and so on.

As luck would have it, there is just such a dataset, known as the Labelled Faces in the Wild benchmark. This consists of over 13,000 images of the faces of almost 6000 public figures collected off the web. Crucially, there is more than one image of each person in the database.

There are various other databases, but Labelled Faces in the Wild is well known amongst computer scientists as a challenging benchmark.


The algorithm identified all of the image pairs here as matches. But can you tell which are correct and which are wrong? Answer below

The task in facial recognition is to compare two images and determine whether they show the same person. (Try identifying which of the image pairs shown here are correct matches.)

Humans can do this with an accuracy of 97.53 per cent on this database. But no algorithm has come close to matching this performance.

Until now. The new algorithm works by normalising each face into a 150 x 120 pixel image, by transforming it based on five image landmarks: the position of both eyes, the nose and the two corners of the mouth.


It then divides each image into overlapping patches of 25 x 25 pixels and describes each patch using a mathematical object known as a vector which captures its basic features. Having done that, the algorithm is ready to compare the images looking for similarities.
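As a rough illustration of that preprocessing step (not the authors' code; the stride is a made-up value, and flattened raw pixels stand in for whatever per-patch descriptor GaussianFace actually computes), extracting overlapping 25 x 25 patches from a normalised 150 x 120 face might look something like this in Python:

import numpy as np

def extract_patches(face, patch=25, stride=12):
    # Cut the normalised face into overlapping patch x patch blocks and
    # flatten each block into one feature vector per patch.
    h, w = face.shape                       # expected to be (150, 120)
    vectors = []
    for y in range(0, h - patch + 1, stride):
        for x in range(0, w - patch + 1, stride):
            block = face[y:y + patch, x:x + patch]
            vectors.append(block.ravel())   # 625-dimensional vector for this patch
    return np.array(vectors)

face = np.random.rand(150, 120)             # stand-in for a normalised face image
print(extract_patches(face).shape)          # (number of patches, 625)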

But first it needs to know what to look for. This is where the training data set comes in. The usual approach is to use a single dataset to train the algorithm and to use a sample of images from the same dataset to test the algorithm on.


But when the algorithm is faced with images that are entirely different from the training set, it often fails. “When the [image] distribution changes, these methods may suffer a large performance drop,” say Chaochao and Xiaoou.


Instead, they’ve trained GaussianFace on four entirely different datasets with very different images. For example, one of these datasets is known as the Multi-PIE database and consists of face images of 337 subjects from 15 different viewpoints under 19 different conditions of illumination taken in four photo sessions. Another is a database called Life Photos which contains about 10 images of 400 different people.


Having trained the algorithm on these datasets, they finally let it loose on the Labelled Faces in the Wild database. The goal is to identify matched pairs and to spot mismatched pairs too.

Remember that humans can do this with an accuracy of 97.53 per cent. “Our GaussianFace model can improve the accuracy to 98.52%, which for the first time beats the human-level performance,” say Chaochao and Xiaoou.


That’s an impressive result because of the wide variety of extreme conditions in these photos.


Chaochao and Xiaoou point out that there are many challenges ahead, however. Humans can use all kinds of additional cues to do this task, such as neck and shoulder configuration. “Surpassing the human-level performance may only be symbolically significant,” they say.

Another problem is the time it takes to train the new algorithm, the amount of memory it requires and the running time to identify matches. Some of that can be tackled by parallelising the algorithm and using a number of bespoke computer processing techniques.


Nevertheless, accurate automated face recognition is coming and on this evidence sooner rather than later.


Answer: the vertical image pairs are correct matches. The horizontal image pairs are mismatches that the algorithm got wrong

Ref: http://ift.tt/1qGjYMC : Surpassing Human-Level Face Verification Performance on LFW with GaussianFace


Published: March 26th, 2014. Updated: March 31st, 2014.


Introduction

A powerful feature that makes JavaScript unique is its ability to work asynchronously via callback functions. Assigning async callbacks lets you write event-driven code, but it also makes tracking down bugs a hair-pulling experience, since the JavaScript is not executing in a linear fashion.

Luckily, now in Chrome Canary DevTools, you can view the full call stack of asynchronous JavaScript callbacks!

A quick teaser overview of async call stacks.

(We’ll break down the flow of this demo soon.)

Once you enable the async call stack feature in DevTools, you will be able to drill into the state of your web app at various points in time. Walk the full stack trace for event listeners, setInterval, setTimeout, XMLHttpRequest, promises, requestAnimationFrame, MutationObservers, and more.

As you walk the stack trace, you can also analyze the value of any variable at that particular point of runtime execution. It’s like a time machine for your watch expressions!

Let’s enable this feature and take a look at a few of these scenarios.

Enable async debugging in Chrome Canary

Try out this new feature by enabling it in Chrome Canary (build 35 or higher). Go to the Sources panel of Chrome Canary DevTools.

Next to the Call Stack panel on the right hand side, there is a new checkbox for "Async". Toggle the checkbox to turn async debugging on or off. (Although once it’s on, you may not ever want to turn it off.)


Capture delayed timer events and XHR responses

You’ve probably seen this before in Gmail:


If there is a problem sending the request (either the server is having problems or there are network connectivity issues on the client side), Gmail will automatically try re-sending the message after a short timeout.

To see how async call stacks can help us analyze delayed timer events and XHR responses, I’ve recreated that flow with a mock Gmail example. The full JavaScript code can be found in the link above but the flow is as follows:

In the diagram above, the methods highlighted in blue are prime spots for this new DevTools feature to be the most beneficial, since these methods work asynchronously.

By solely looking at the Call Stack panel in previous versions of DevTools, a breakpoint within postOnFail() would give you little information about where postOnFail() was being called from. But look at the difference when turning on async stacks:

Before: The Call Stack panel without async enabled.

Here you can see that postOnFail() was initiated from an AJAX callback but no further info.

After: The Call Stack panel with async enabled.

Here you can see that the XHR was initiated from submitHandler(), which was initiated from a click handler bound from scripts.js. Nice!

With async call stacks turned on, you can view the entire call stack to easily see if the request was initiated from submitHandler() as it was above, or from retrySubmit() as it is below:


From the Call Stack panel, you can also tell if the breakpoint event originated earlier from a UI event like ‘click’, a setTimeout() delay, or any commonly used async callback event.

Watch expressions asynchronously

When you walk the full call stack, your watched expressions will also update to reflect the state they were in at that time!


Evaluate code from past scopes

In addition to simply watching expressions, you can interact with your code from previous scopes right in the DevTools JavaScript console panel.

Imagine that you are Dr. Who and you need a little help comparing the clock from before you got into the Tardis to "now". From the DevTools console, you can easily evaluate, store, and do calculations on values from across different execution points.

Use the JavaScript console in conjunction with async call stacks to debug your code. The above demo can be found here.

Staying within DevTools to manipulate your expressions will save you time from having to switch back to your source code, make edits, and refresh the browser.

Coming soon: Unravel chained promise resolutions

If you thought the previous mock Gmail flow was hard to unravel without the async call stack feature enabled, can you imagine how much harder it would be with more complex asynchronous flows like chained promises? Let’s revisit the final example of Jake Archibald’s tutorial on JavaScript Promises.

Flow diagram from JavaScript Promises.

Here’s a little animation of walking the call stacks in Jake’s async-best-example.html example.

Before: The Call Stack panel without async enabled.

Notice how the Call Stack panel is pretty short on info when trying to debug promises.

After: The Call Stack panel with async enabled.

Wow! Such promises. Much callbacks.

Promise support for call stacks will be ready soon, as the promise implementation is switching from the version in Blink to the final one within V8.

In the spirit of walking back in time, if you want to preview async call stacks for promises today, you can check it out in Chrome 33 or Chrome 34. Go to chrome://flags/#enable-devtools-experiments and enable Developer Tools experiments. After you restart Canary, go to the DevTools settings and there will be an option to enable support for async stack traces.

Get insights into your web animations

Let’s go deeper into the HTML5Rocks archives. Remember Paul Lewis’ Leaner, Meaner, Faster Animations with requestAnimationFrame?

Open up the requestAnimationFrame demo and add a breakpoint at the beginning of the update() method (around line 874) of post.html. With async call stacks we get a lot more insights into requestAnimationFrame. And, much like the mock Gmail example, we get to walk all the way back to the initiating event which was a ‘scroll’ event.

Before: The Call Stack panel without async enabled. After: with async enabled.

Track down DOM updates when using MutationObserver

MutationObservers allow us to observe changes in the DOM. In this simple example, when you click on the button, a new DOM node is appended to <div class="rows"></div>.

Add a breakpoint within nodeAdded() (line 31) in demo.html. With async call stacks enabled, you can now walk the call stack back through addNode() to the initial click event.

Before: The Call Stack panel without async enabled. After: with async enabled.

Tips for debugging JavaScript in async call stacks

Name your functions

If you tend to assign all of your callbacks as anonymous functions, you may wish to instead give them a name to make viewing the call stack easier.

For example, take an anonymous function like this:

window.addEventListener('load', function(){
  // do something
});

And give it a name like windowLoaded():

window.addEventListener('load', function windowLoaded(){
  // do something
});

When the load event fires, it will show up in the DevTools stack trace with its function name instead of the cryptic "(anonymous function)". This makes it much easier to see at a glance what’s happening in your stack trace.

Before / After: the same stack trace, first with anonymous functions and then with named ones.

Explore further

To recap, these are all the asynchronous callbacks in which DevTools will display the full call stack:

  • Timers: Walk back to where setTimeout() or setInterval() was initialized.
  • XHRs: Walk back to where xhr.send() was called.
  • Animation frames: Walk back to where requestAnimationFrame was called.
  • Event listeners: Walk back to where the event was originally bound with addEventListener().
  • MutationObservers: Walk back to where the mutation observer event was fired.

Full call stacks will be coming soon for these experimental JavaScript APIs:

  • Promises: Walk back to where a promise has been resolved.
  • Object.observe: Walk back to where the observer callback was originally bound.

Being able to see the full stack trace of your JavaScript callbacks should keep those hairs on your head. This feature in DevTools will be especially helpful when multiple async events happen in relation to each other, or if an uncaught exception is thrown from within an async callback.

Give it a try in Chrome Canary. If you have feedback on this new feature, drop us a line on the Chrome DevTools Group or file a bug in the Chrome DevTools bug tracker.


Postman is a powerful HTTP client to help test web services easily and efficiently. Postman lets you craft simple as well as complex HTTP requests quickly. It also saves requests for future use so that you never have to repeat your keystrokes ever again. Postman is designed to save you and your team tons of time. Check out more features below or just install from the Chrome Web Store to get started.

From the blog

A new update is available. Check out the blog for more details!

Postman is one of the top productivity tools on the Chrome Web Store and used by thousands of developers daily. Rated ★★★★★

Tools of the trade: the browser-based tools the Guardian’s digital team uses for coding

A potential Swiss army knife for web service developers, Postman is a powerful HTTP client to let you test REST web services. With its incredibly clean and intuitive interface and a rich feature set, it’s an ideal way to quickly test your requests when developing a REST app. Being able to switch environment variables, from local testing to deploying to the cloud and testing there, is supremely useful. The low learning curve also means you will be building and testing RESTful web services quickly.

…I resisted Chrome for years because I don’t auto-subscribe to Google hype, but I’ve somewhat grudgingly moved towards it (from years of Firefox and Safari) because the dev tools are so good – the Postman REST client alone sells it for me, it makes driving APIs a doddle.

Paul Tweedy @ UsesThis.com

Senior Technical Architect, BBC

I’d like to thank the Postman REST API Chrome app for making my developing life a billion times easier today

Not sure how I lived without Postman before.

Postman is supported by awesome people


Consume or provide cloud services with the Mashape API Platform & Marketplace.


The all-in-one platform for web APIs.

You can donate through Gumroad or send Paypal donations at abhinav@rickreation.com
Follow Postman for updates and API dev tips on Twitter, Google+ or Github. Postman is brought to you by @a85


The field of web design follows many trends. Some are here to stay, while others come and go. One-page website design is one of those evergreen trends that is not going out of fashion. Though it is not the most common design approach, many designers follow it: it lets them apply their creativity freely and experiment with different elements to see how their target audience interacts with their work.



