RaHaprogramming

Android – Working with XML Animations

Adding animations to your app interface gives a high-quality feel to your Android applications. Animations can be defined either in XML or in Android code. In this tutorial I explain how to create animations using XML notation; I will cover the same using Android Java code in future tutorials. Here I cover basic Android animations like fade in, fade out, scale, rotate, slide up, and slide down.

In the source code project provided with this tutorial, I wrote a separate activity and XML file for each animation. Download and play with the code to get familiar with the animations. Following is the list of animations covered in this article.


Basic steps to perform animation

Following are the basic steps to perform an animation on any UI element. Creating an animation is very simple; all it takes is creating the necessary files and writing small pieces of code.

Step 1: Create xml that defines the animation

Create an XML file which defines the type of animation to perform. This file should be located in the anim folder under the res directory (res ⇒ anim ⇒ animation.xml). If you don’t have an anim folder in your res directory, create one. Following is an example of a simple fade in animation.


<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >

    <alpha
        android:duration="1000"
        android:fromAlpha="0.0"
        android:interpolator="@android:anim/accelerate_interpolator"
        android:toAlpha="1.0" />

</set>

Step 2: Load the animation

Next, in your activity, create an object of the Animation class and load the XML animation using AnimationUtils by calling the loadAnimation function.

public class FadeInActivity extends Activity {
    TextView txtMessage;

    // Animation
    Animation animFadein;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_fadein);

        txtMessage = (TextView) findViewById(R.id.txtMessage);

        // load the animation
        animFadein = AnimationUtils.loadAnimation(getApplicationContext(),
                R.anim.fade_in);
    }
}

Step 3: Set the animation listeners (Optional)

If you want to listen to animation events like start, end, and repeat, your activity should implement AnimationListener. This step is optional. If you implement AnimationListener, you will have to override the following methods.

onAnimationStart – This will be triggered once the animation starts
onAnimationEnd – This will be triggered once the animation is over
onAnimationRepeat – This will be triggered each time the animation repeats

public class FadeInActivity extends Activity implements AnimationListener {

    // inside onCreate(), after loading the animation, set the listener:
    // animFadein.setAnimationListener(this);

    // animation listeners
    @Override
    public void onAnimationEnd(Animation animation) {
        // Take any action after completing the animation

        // check for fade in animation
        if (animation == animFadein) {
            Toast.makeText(getApplicationContext(), "Animation Stopped",
                    Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onAnimationRepeat(Animation animation) {
        // Animation is repeating
    }

    @Override
    public void onAnimationStart(Animation animation) {
        // Animation started
    }
}

Step 4: Finally start the animation

You can start the animation whenever you want by calling startAnimation on any UI element, passing the animation to play. In this example I am calling the fade in animation on a TextView.

// start the animation
txtMessage.startAnimation(animFadein);

Complete Code

Following is the complete code for FadeInActivity.

public class FadeInActivity extends Activity implements AnimationListener {
    TextView txtMessage;
    Button btnStart;

    // Animation
    Animation animFadein;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_fadein);

        txtMessage = (TextView) findViewById(R.id.txtMessage);
        btnStart = (Button) findViewById(R.id.btnStart);

        // load the animation
        animFadein = AnimationUtils.loadAnimation(getApplicationContext(),
                R.anim.fade_in);

        // set animation listener
        animFadein.setAnimationListener(this);

        // button click event
        btnStart.setOnClickListener(new View.OnClickListener() {
            @Override
            public void onClick(View v) {
                txtMessage.setVisibility(View.VISIBLE);
                // start the animation
                txtMessage.startAnimation(animFadein);
            }
        });
    }

    @Override
    public void onAnimationEnd(Animation animation) {
        // Take any action after completing the animation

        // check for fade in animation
        if (animation == animFadein) {
            Toast.makeText(getApplicationContext(), "Animation Stopped",
                    Toast.LENGTH_SHORT).show();
        }
    }

    @Override
    public void onAnimationRepeat(Animation animation) {
        // Animation is repeating
    }

    @Override
    public void onAnimationStart(Animation animation) {
        // Animation started
    }
}

Important XML animation attributes

When working with animations it is better to have thorough knowledge of some of the important XML attributes which make a major difference in animation behavior. Following are the important attributes you must know about.

android:duration – The duration within which the animation should complete

android:startOffset – The waiting time before the animation starts. This attribute is mainly used to perform multiple animations in a sequential manner

android:interpolator – The rate of change of the animation

android:fillAfter – Defines whether to keep the animation transformation applied after the animation completes. If set to false, the element returns to its previous state after the animation. This attribute should be used on the <set> node

android:repeatMode – Useful when you want the animation to repeat (for example, restart or reverse)

android:repeatCount – Defines the number of times the animation repeats. If you set this value to infinite, the animation repeats indefinitely
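As an illustration, the attributes above can be combined in a single anim resource. This snippet is not from the sample project; the values are arbitrary examples:

```xml
<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >

    <!-- Wait 500 ms, then fade in over 1000 ms, reversing twice -->
    <alpha
        android:duration="1000"
        android:startOffset="500"
        android:fromAlpha="0.0"
        android:toAlpha="1.0"
        android:interpolator="@android:anim/decelerate_interpolator"
        android:repeatMode="reverse"
        android:repeatCount="2" />

</set>
```

Because fillAfter is set on the <set> node, the element keeps its final alpha after the animation ends.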

Some useful animations

Following, I am giving XML code for a number of useful animations. Try assigning different values to the XML attributes to see how the animations change.

1. Fade In
2. Fade Out
3. Cross Fading
4. Blink
5. Zoom In
6. Zoom Out
7. Rotate
8. Move
9. Slide Up
10. Slide Down
11. Bounce
12. Sequential Animation
13. Together Animation

Fade In

For fade in animation you can use the <alpha> tag, which defines the alpha value. Fade in animation is nothing but increasing the alpha value from 0 to 1.

fade_in.xml

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >

    <alpha
        android:duration="1000"
        android:fromAlpha="0.0"
        android:interpolator="@android:anim/accelerate_interpolator"
        android:toAlpha="1.0" />

</set>

Fade Out

Fade out is exactly the opposite of fade in: we decrease the alpha value from 1 to 0.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >

    <alpha
        android:duration="1000"
        android:fromAlpha="1.0"
        android:interpolator="@android:anim/accelerate_interpolator"
        android:toAlpha="0.0" />

</set>

Cross Fading

Cross fading is performing a fade in animation on one element while another element fades out. For this you don’t have to create separate animation files; you can just reuse the fade_in.xml and fade_out.xml files.

In the following code I load the fade in and fade out animations, then perform them on two different UI elements.

TextView txtMessage1, txtMessage2;
Animation animFadeIn, animFadeOut;

// load animations
animFadeIn = AnimationUtils.loadAnimation(getApplicationContext(),
        R.anim.fade_in);
animFadeOut = AnimationUtils.loadAnimation(getApplicationContext(),
        R.anim.fade_out);

// set animation listeners
animFadeIn.setAnimationListener(this);
animFadeOut.setAnimationListener(this);

// Make the fading-in element visible first
txtMessage2.setVisibility(View.VISIBLE);

// start fade in animation
txtMessage2.startAnimation(animFadeIn);

// start fade out animation
txtMessage1.startAnimation(animFadeOut);
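If you want the fade in to begin only after the fade out has finished (a sequential swap rather than a simultaneous cross fade), one common pattern is to trigger it from a listener. This variant is not part of the sample project; it is a sketch assuming the same fields (animFadeIn, animFadeOut, txtMessage1, txtMessage2) as above:

```java
// Hypothetical variant: chain the two animations instead of running them together.
animFadeOut.setAnimationListener(new Animation.AnimationListener() {
    @Override
    public void onAnimationStart(Animation animation) { }

    @Override
    public void onAnimationRepeat(Animation animation) { }

    @Override
    public void onAnimationEnd(Animation animation) {
        // fade out finished: hide the old view, then fade the new one in
        txtMessage1.setVisibility(View.INVISIBLE);
        txtMessage2.setVisibility(View.VISIBLE);
        txtMessage2.startAnimation(animFadeIn);
    }
});
txtMessage1.startAnimation(animFadeOut);
```

Note that setting a new listener here replaces any listener previously set on animFadeOut.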

Blink

Blink animation is performing fade out and fade in repeatedly. For this you will have to set the android:repeatMode="reverse" and android:repeatCount attributes.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android">

    <alpha
        android:fromAlpha="0.0"
        android:toAlpha="1.0"
        android:interpolator="@android:anim/accelerate_interpolator"
        android:duration="600"
        android:repeatMode="reverse"
        android:repeatCount="infinite" />

</set>

Zoom In

For zoom, use the <scale> tag. Use pivotX="50%" and pivotY="50%" to zoom from the center of the element. You also need the fromXScale/fromYScale attributes, which define the starting scale of the object. For zoom in, keep these values less than toXScale/toYScale.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >

    <scale
        android:duration="1000"
        android:fromXScale="1"
        android:fromYScale="1"
        android:pivotX="50%"
        android:pivotY="50%"
        android:toXScale="3"
        android:toYScale="3" />

</set>

Zoom Out

Zoom out animation is the same as zoom in, but the toXScale/toYScale values are less than the fromXScale/fromYScale values.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >

    <scale
        android:duration="1000"
        android:fromXScale="1.0"
        android:fromYScale="1.0"
        android:pivotX="50%"
        android:pivotY="50%"
        android:toXScale="0.5"
        android:toYScale="0.5" />

</set>

Rotate

Rotate animation uses the <rotate> tag. For rotate animation the required attributes are android:fromDegrees and android:toDegrees, which define the rotation angles.

Clockwise – use a positive toDegrees value
Counter-clockwise – use a negative toDegrees value

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android">

    <rotate
        android:fromDegrees="0"
        android:toDegrees="360"
        android:pivotX="50%"
        android:pivotY="50%"
        android:duration="600"
        android:repeatMode="restart"
        android:repeatCount="infinite"
        android:interpolator="@android:anim/cycle_interpolator" />

</set>

Move

In order to change the position of an object, use the <translate> tag. It uses the fromXDelta/toXDelta attributes for the X direction and fromYDelta/toYDelta for the Y direction.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:interpolator="@android:anim/linear_interpolator"
    android:fillAfter="true" >

    <translate
        android:fromXDelta="0%p"
        android:toXDelta="75%p"
        android:duration="800" />

</set>

Slide Up

Sliding animation uses only the <scale> tag. Slide up can be achieved by setting android:fromYScale="1.0" and android:toYScale="0.0".

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >

    <scale
        android:duration="500"
        android:fromXScale="1.0"
        android:fromYScale="1.0"
        android:interpolator="@android:anim/linear_interpolator"
        android:toXScale="1.0"
        android:toYScale="0.0" />

</set>

Slide Down

Slide down is exactly the opposite of the slide up animation. Just interchange the android:fromYScale and android:toYScale values.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true" >

    <scale
        android:duration="500"
        android:fromXScale="1.0"
        android:fromYScale="0.0"
        android:interpolator="@android:anim/linear_interpolator"
        android:toXScale="1.0"
        android:toYScale="1.0" />

</set>

Bounce

Bounce is an animation effect where the animation ends in a bouncing fashion. For this, set the android:interpolator value to @android:anim/bounce_interpolator. The bounce can be used with any kind of animation. The following slide down example uses the bounce effect.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true"
    android:interpolator="@android:anim/bounce_interpolator" >

    <scale
        android:duration="500"
        android:fromXScale="1.0"
        android:fromYScale="0.0"
        android:toXScale="1.0"
        android:toYScale="1.0" />

</set>

Sequential Animation

If you want to perform multiple animations in a sequential manner, you have to use android:startOffset to give a start delay. An easy way to calculate this value is to add the duration and startOffset values of the previous animation. Following is a sequential animation where a set of move animations performs in sequence.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true"
    android:interpolator="@android:anim/linear_interpolator" >

    <!-- Use startOffset to give delay between animations -->

    <!-- Move -->
    <translate
        android:duration="800"
        android:fillAfter="true"
        android:fromXDelta="0%p"
        android:startOffset="300"
        android:toXDelta="75%p" />

    <translate
        android:duration="800"
        android:fillAfter="true"
        android:fromYDelta="0%p"
        android:startOffset="1100"
        android:toYDelta="70%p" />

    <translate
        android:duration="800"
        android:fillAfter="true"
        android:fromXDelta="0%p"
        android:startOffset="1900"
        android:toXDelta="-75%p" />

    <translate
        android:duration="800"
        android:fillAfter="true"
        android:fromYDelta="0%p"
        android:startOffset="2700"
        android:toYDelta="-70%p" />

    <!-- Rotate 360 degrees -->
    <rotate
        android:duration="1000"
        android:fromDegrees="0"
        android:interpolator="@android:anim/cycle_interpolator"
        android:pivotX="50%"
        android:pivotY="50%"
        android:startOffset="3800"
        android:repeatCount="infinite"
        android:repeatMode="restart"
        android:toDegrees="360" />

</set>

Together Animation

Performing all animations together is just a matter of writing the animations one after another without using android:startOffset.

<?xml version="1.0" encoding="utf-8"?>
<set xmlns:android="http://schemas.android.com/apk/res/android"
    android:fillAfter="true"
    android:interpolator="@android:anim/linear_interpolator" >

    <scale
        android:duration="4000"
        android:fromXScale="1"
        android:fromYScale="1"
        android:pivotX="50%"
        android:pivotY="50%"
        android:toXScale="4"
        android:toYScale="4" />

    <!-- Rotate 360 degrees -->
    <rotate
        android:duration="500"
        android:fromDegrees="0"
        android:pivotX="50%"
        android:pivotY="50%"
        android:repeatCount="infinite"
        android:repeatMode="restart"
        android:toDegrees="360" />

</set>

I hope you liked this tutorial; feel free to ask any questions in the comments section.

TextureView: Video Cropping – Full Screen Video Background in Android Applications

Video Cropping

In this part of Android SurfaceView story we are going to create application which will do the following:

  • display a video from the res/raw folder using a TextureView
  • display a full screen background video without distorting the video size

This tutorial is derived from the creation of our latest app, TrackBack – available on the Play Store soon. Some XML names and items have been changed slightly to suit this tutorial. We used royalty-free video and a basic video editor to adjust the video size and combine clips.

Final Results:

Step 1 – Preparing

Create an Android project targeting Android 4.0 (API 14). Make sure you have the following lines in your AndroidManifest.xml file.

<uses-sdk
    android:minSdkVersion="14"
    android:targetSdkVersion="14"/>

Step 2 – XML

Copy a video file to your res/raw folder. If raw doesn’t exist, create it.

In your values folder create a dimen.xml file and add the following lines.

<!-- common settings -->
<dimen name="padding_left_right">23dp</dimen>
<dimen name="margin_left_right">23dp</dimen>
<dimen name="margin_left_right_large">50dp</dimen>
<dimen name="margin_top_bottom">30dp</dimen>
<dimen name="margin_top_bottom_lg">90dp</dimen>
<dimen name="margin_btn_lg">50dp</dimen>
<dimen name="text_btn_lg">21dp</dimen>
<dimen name="margin_top_bottom_alt">45dp</dimen>
<dimen name="heading_lg">35sp</dimen>
<dimen name="heading_md">21sp</dimen>

In your values folder create a string.xml file and add the following lines, adjusting as appropriate.

<!-- app required strings -->
<string name="app_name">TrackBack</string>
<!-- Strings related to login -->
<string name="welcome_login">Do you have an account? Sign in</string>
<string name="welcome_msg">Keep track of everyone and everything you love... in real time!</string>

In your layout folder create an activity_video_crop.xml file (or any appropriate name for your app) and add the following lines:

<?xml version="1.0" encoding="utf-8"?>
<androidx.constraintlayout.widget.ConstraintLayout xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    xmlns:tools="http://schemas.android.com/tools"
    android:id="@+id/welcome_constraint"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    tools:context=".Activity_Welcome">

    <FrameLayout
        android:id="@+id/welcome_frame"
        android:layout_width="match_parent"
        android:layout_height="match_parent">

    <TextureView
        android:id="@+id/videoview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_gravity="center"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintBottom_toBottomOf="parent"

        >
    </TextureView>

    <androidx.constraintlayout.widget.ConstraintLayout
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:background="@color/colorPrimarySeeThrough"
        >
    <ImageView
        android:layout_width="55dp"
        android:layout_height="55dp"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintLeft_toLeftOf="parent"
        app:layout_constraintRight_toRightOf="parent"
        android:layout_marginTop="@dimen/margin_top_bottom_lg"
        android:id="@+id/welcome_logo"
        android:src="@drawable/logo_white"
        />
        <TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_margin="@dimen/margin_left_right"
            android:id="@+id/welcome_text"
          app:layout_constraintTop_toBottomOf="@+id/welcome_logo"
            android:text="@string/app_name"
            android:textSize="@dimen/heading_lg"
            android:textColor="@color/white"
            android:textStyle="bold"
            android:textAlignment="center"
            />
        <TextView
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
            android:layout_marginBottom="90dp"
            android:layout_marginRight="@dimen/margin_left_right"
            android:layout_marginLeft="@dimen/margin_left_right"
            android:id="@+id/welcome_note_2"
          app:layout_constraintBottom_toBottomOf="@+id/btn_start"
            android:text="@string/welcome_msg"
            android:textSize="@dimen/heading_md"
            android:textColor="@color/white"
            android:backgroundTint="@color/white"
            android:textAlignment="center"
            />
        <Button
            android:layout_width="match_parent"
            android:layout_height="wrap_content"
           android:background="@drawable/rounded_top_bottom_blue"
            android:text=" Continue "
            android:textSize="@dimen/text_btn_lg"
            android:textAlignment="center"
            android:gravity="center"
            android:foregroundGravity="center"
            android:textColor="@color/slight_white"
     app:layout_constraintBottom_toBottomOf="@id/welcome_sign_in"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintLeft_toLeftOf="parent"
       android:layout_marginBottom="@dimen/margin_top_bottom_alt"
            android:layout_marginLeft="@dimen/margin_btn_lg"
            android:layout_marginRight="@dimen/margin_btn_lg"
            android:id="@+id/btn_start"
            />
        <TextView
            android:id="@+id/welcome_sign_in"
            android:layout_width="wrap_content"
            android:layout_height="wrap_content"
            app:layout_constraintBottom_toBottomOf="parent"
            app:layout_constraintRight_toRightOf="parent"
            app:layout_constraintLeft_toLeftOf="parent"
            android:textSize="20sp"
            android:textColor="@color/slight_white"
            android:text="@string/welcome_login"
        android:layout_marginBottom="@dimen/margin_top_bottom_lg"
            />
    </androidx.constraintlayout.widget.ConstraintLayout>
    </FrameLayout>
</androidx.constraintlayout.widget.ConstraintLayout>

Note: All that’s required here is the FrameLayout and the TextureView. I prefer using ConstraintLayout as the root to easily position items.
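To make the note concrete, here is a minimal version of the layout, stripped to just the required views and ignoring the ConstraintLayout overlay (the ids are kept from the original so the Java code below still works):

```xml
<?xml version="1.0" encoding="utf-8"?>
<FrameLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/welcome_frame"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <!-- The video frames are rendered onto this TextureView -->
    <TextureView
        android:id="@+id/videoview"
        android:layout_width="match_parent"
        android:layout_height="match_parent"
        android:layout_gravity="center" />

</FrameLayout>
```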

Step 3 – Basic code

Create a new activity class and call it ActivityCrop or something appropriate. Don’t forget to declare it inside the AndroidManifest.xml file.
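The declaration would look something like this (the class name matches the code that follows; making it the launcher activity and the screenOrientation attribute are assumptions for illustration):

```xml
<!-- inside the <application> element of AndroidManifest.xml -->
<activity
    android:name=".Activity_Welcome"
    android:screenOrientation="portrait">
    <intent-filter>
        <action android:name="android.intent.action.MAIN" />
        <category android:name="android.intent.category.LAUNCHER" />
    </intent-filter>
</activity>
```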

Imports:

package com.rahaprogramming.trackback;

import android.content.Context;
import android.graphics.Matrix;
import android.graphics.SurfaceTexture;
import android.media.MediaPlayer;
import android.net.Uri;
import android.os.Bundle;
import android.util.Log;
import android.view.Surface;
import android.view.TextureView;
import android.widget.FrameLayout;
import androidx.appcompat.app.AppCompatActivity;
public class Activity_Welcome extends AppCompatActivity implements
        TextureView.SurfaceTextureListener, MediaPlayer.OnCompletionListener {
    //declare class variables
    Context context = this;
    Uri video_uri;
    
    // MediaPlayer instance
    private MediaPlayer mMediaPlayer;
    //views
    private TextureView mTextureView;
    FrameLayout frameLayout;
    Surface surface;
    // Original video size - we created this video and knew the size.
    // for unknown size - use meta data
    private float mVideoWidth = 1080;
    private float mVideoHeight = 1440;

    @Override
    protected void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.activity_welcome);
        //Toolbar toolbar = findViewById(R.id.toolbar);
        //setSupportActionBar(toolbar);
        video_uri = Uri.parse("android.resource://"+getPackageName()+"/"+R.raw.trackback);

        //set views
        mTextureView = findViewById(R.id.videoview);
        mTextureView.setClickable(false);
        frameLayout = findViewById(R.id.welcome_frame);

        //init MediaPlayer
        mMediaPlayer = new MediaPlayer();

        //set implemented listeners
        mMediaPlayer.setOnCompletionListener(this);
        mTextureView.setSurfaceTextureListener(this);
    }
    //plays the video
    private void playVideo() {
        //its a big file - use separate thread
        new Thread(new Runnable() {
            public void run() {
                try {
                    mMediaPlayer.setDataSource(context,video_uri);
                    mMediaPlayer.setLooping(true);
                    mMediaPlayer.prepareAsync();
                    // Play video when the media source is ready for playback.
                    mMediaPlayer.setOnPreparedListener(new MediaPlayer.OnPreparedListener() {
                        @Override
                        public void onPrepared(MediaPlayer mediaPlayer) {
                            mediaPlayer.start();
                        }
                    });
                } catch (Exception e) { // could split the exceptions to handle specific errors
                    Utils.log("Error: "+e.toString());
                    e.printStackTrace();
                }
            }
        }).start();
    }
    @Override
    public void onSurfaceTextureAvailable(SurfaceTexture surfaceTexture, int width, int height) {
        //set surface
        surface = new Surface(surfaceTexture);
        mMediaPlayer.setSurface(surface);
        //update viewable area
        updateTextureViewSize(width, height);
    }

    @Override
    public void onSurfaceTextureSizeChanged(SurfaceTexture surfaceTexture, int width, int height) {
        //set surface
        surface = new Surface(surfaceTexture);
        mMediaPlayer.setSurface(surface);
        //update viewable area
        updateTextureViewSize(width, height);
    }

    //uses the view width to determine best crop to fit the screen
    //@param int viewWidth width of viewport
    //@param int viewHeight height of viewport
    private void updateTextureViewSize(int viewWidth, int viewHeight) {
        float scaleX = 1.0f;
        float scaleY = 1.0f;

        Utils.log(viewWidth+" "+viewHeight+" "+mVideoHeight+" "+mVideoWidth);
        if (mVideoWidth > viewWidth && mVideoHeight > viewHeight) {
            scaleX = mVideoWidth / viewWidth;
            scaleY = mVideoHeight / viewHeight;
        } else if (mVideoWidth < viewWidth && mVideoHeight < viewHeight) {
            scaleY = viewWidth / mVideoWidth;
            scaleX = viewHeight / mVideoHeight;
        } else if (viewWidth > mVideoWidth) {
            scaleY = (viewWidth / mVideoWidth) / (viewHeight / mVideoHeight);
        } else if (viewHeight > mVideoHeight) {
            scaleX = (viewHeight / mVideoHeight) / (viewWidth / mVideoWidth);
        }

        // Calculate pivot points, in our case crop from center
        int pivotPointX = viewWidth / 2;
        int pivotPointY = viewHeight / 2;

        Matrix matrix = new Matrix();
        matrix.setScale(scaleX, scaleY, pivotPointX, pivotPointY);
        //transform the video viewing size
        mTextureView.setTransform(matrix);
        //set the width and height of playing view
        mTextureView.setLayoutParams(new FrameLayout.LayoutParams(viewWidth, viewHeight));
        //finally, play the video
        playVideo();
    }

    @Override
    public boolean onSurfaceTextureDestroyed(SurfaceTexture surface) {
        return false;
    }

    @Override
    public void onSurfaceTextureUpdated(SurfaceTexture surface) {

    }

    // callback when the video is over
    public void onCompletion(MediaPlayer mp) {
        //if this happens.. never will.. just restart the video
        mp.stop();
        mp.release();
    }
    @Override
    protected void onDestroy() {
        super.onDestroy();
        if (mMediaPlayer != null) {
            // Make sure we stop video and release resources when activity is destroyed.
            mMediaPlayer.stop();
            mMediaPlayer.release();
            mMediaPlayer = null;
        }
    }
}
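The comment in the code above notes that for a video of unknown size you should use metadata instead of hard-coding the dimensions. A sketch of that, not part of the original code, using MediaMetadataRetriever on the same context and video_uri fields:

```java
// Hypothetical helper: read the real video dimensions from the file's metadata
// instead of hard-coding mVideoWidth/mVideoHeight.
private void readVideoSize() {
    MediaMetadataRetriever retriever = new MediaMetadataRetriever();
    try {
        retriever.setDataSource(context, video_uri);
        mVideoWidth = Float.parseFloat(retriever.extractMetadata(
                MediaMetadataRetriever.METADATA_KEY_VIDEO_WIDTH));
        mVideoHeight = Float.parseFloat(retriever.extractMetadata(
                MediaMetadataRetriever.METADATA_KEY_VIDEO_HEIGHT));
        retriever.release();
    } catch (Exception e) {
        // keep the hard-coded defaults on failure
        e.printStackTrace();
    }
}
```

Call it in onCreate() before the surface becomes available so updateTextureViewSize uses the real dimensions.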

Step 5 – Video cropping

The resizing is done by the updateTextureViewSize method. First we calculate the scaleX and scaleY factors and set them on a Matrix object using setScale(..). Then we pass this matrix to the TextureView via the setTransform(..) method, and we are done.

//uses the view width to determine best crop to fit the screen
    //@param int viewWidth width of viewport
    //@param int viewHeight height of viewport
    private void updateTextureViewSize(int viewWidth, int viewHeight) {
        float scaleX = 1.0f;
        float scaleY = 1.0f;

        Utils.log(viewWidth+" "+viewHeight+" "+mVideoHeight+" "+mVideoWidth);
        if (mVideoWidth > viewWidth && mVideoHeight > viewHeight) {
            scaleX = mVideoWidth / viewWidth;
            scaleY = mVideoHeight / viewHeight;
        } else if (mVideoWidth < viewWidth && mVideoHeight < viewHeight) {
            scaleY = viewWidth / mVideoWidth;
            scaleX = viewHeight / mVideoHeight;
        } else if (viewWidth > mVideoWidth) {
            scaleY = (viewWidth / mVideoWidth) / (viewHeight / mVideoHeight);
        } else if (viewHeight > mVideoHeight) {
            scaleX = (viewHeight / mVideoHeight) / (viewWidth / mVideoWidth);
        }

        // Calculate pivot points, in our case crop from center
        int pivotPointX = viewWidth / 2;
        int pivotPointY = viewHeight / 2;

        Matrix matrix = new Matrix();
        matrix.setScale(scaleX, scaleY, pivotPointX, pivotPointY);
        //transform the video viewing size
        mTextureView.setTransform(matrix);
        //set the width and height of playing view
        mTextureView.setLayoutParams(new FrameLayout.LayoutParams(viewWidth, viewHeight));
        //finally, play the video
        playVideo();
    }
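To make the branch logic concrete, here is a small standalone sketch (mine, not from the tutorial) of the same scale computation with one worked example: a 1080×1440 video in a 1080×1920 view falls into the viewHeight > mVideoHeight branch, giving scaleX = (1920/1440)/(1080/1080) ≈ 1.33, i.e. the video is widened by a third and the sides are cropped off.

```java
public class CropScale {
    // Mirrors the branch logic of updateTextureViewSize, returning {scaleX, scaleY}.
    static float[] scaleFor(float videoW, float videoH, int viewW, int viewH) {
        float scaleX = 1.0f, scaleY = 1.0f;
        if (videoW > viewW && videoH > viewH) {
            // video larger in both dimensions: scale up to cover the view
            scaleX = videoW / viewW;
            scaleY = videoH / viewH;
        } else if (videoW < viewW && videoH < viewH) {
            // video smaller in both dimensions
            scaleY = viewW / videoW;
            scaleX = viewH / videoH;
        } else if (viewW > videoW) {
            scaleY = (viewW / videoW) / (viewH / videoH);
        } else if (viewH > videoH) {
            scaleX = (viewH / videoH) / (viewW / videoW);
        }
        return new float[]{scaleX, scaleY};
    }

    public static void main(String[] args) {
        // 1080x1440 video inside a 1080x1920 view: widen by 4/3, crop the sides
        float[] s = scaleFor(1080f, 1440f, 1080, 1920);
        System.out.printf("scaleX=%.3f scaleY=%.3f%n", s[0], s[1]);
    }
}
```

Running it prints scaleX=1.333 scaleY=1.000, matching the hand calculation above.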

Step 6 – Launch

When you launch the application, you should notice that the video is cropped and displayed properly. Of course, when the width-to-height ratio difference is too big, the video loses quality as it is scaled too much – similar to ImageView.setScaleType(ImageView.ScaleType.CENTER_CROP).

New physics rules tested on quantum computer

Aalto researchers have used an IBM quantum computer to explore an overlooked area of physics, and have challenged 100-year-old notions about information at the quantum level.

The rules of quantum physics, which govern how very small things behave, use mathematical operators called Hermitian Hamiltonians. Hermitian operators have underpinned quantum physics for nearly 100 years, but recently, theorists have realized that it is possible to extend its fundamental equations to operators that are not Hermitian. The new equations describe a universe with its own peculiar set of rules: for example, by looking in the mirror and reversing the direction of time, you should see the same version of you as in the actual world. In their new paper, a team of researchers led by Docent Sorin Paraoanu used a quantum computer to create a toy universe that behaves according to these new rules. The team includes Dr. Shruti Dogra from Aalto University, first author of the paper, and Artem Melnikov, from MIPT and Terra Quantum.

The researchers made qubits, the part of the quantum computer that carries out calculations, behave according to the new rules of non-Hermitian quantum mechanics. They demonstrated experimentally a couple of exciting results that are forbidden by regular Hermitian quantum mechanics. The first discovery was that applying operations to the qubits did not conserve quantum information—a behavior so fundamental to standard quantum theory that it results in currently unsolved problems like Stephen Hawking’s black hole information paradox. The second exciting result came when they experimented with two entangled qubits.

Entanglement is a type of correlation that appears between qubits, as if they have a magic connection that makes them behave in sync with each other. Einstein was famously uncomfortable with this concept, referring to it as “spooky action at a distance.” Under regular quantum physics, it is not possible to alter the degree of entanglement between two particles by tampering with one of the particles on its own. However, in non-Hermitian quantum mechanics, the researchers were able to alter the level of entanglement of the qubits by manipulating just one of them, a result that is expressly off-limits in regular quantum physics.

“The exciting thing about these results is that quantum computers are now developed enough to start using them for testing unconventional ideas that have been only mathematical so far,” said Sorin Paraoanu. “With the present work, Einstein’s ‘spooky action at a distance’ becomes even spookier. And although we understand very well what is going on, it still gives you the shivers.”

The research also has potential applications. Several novel optical or microwave-based devices developed in recent times do seem to behave according to the new rules. The present work opens the way to simulating these devices on quantum computers.

The paper, “Quantum simulation of parity-time symmetry breaking with a superconducting quantum processor,” is published in Communications Physics.

Elon Musk’s SpaceX now owns about a third of all active satellites in the sky

SpaceX created a swarm of about a thousand satellites that is circulating about 340 miles overhead, and building the constellation has put SpaceX in a “deep chasm” of expenses, according to CEO Elon Musk. The constellation has also raised concerns about potential in-space collisions and the impact on astronomers’ ability to study the night sky. But for some early customers of the $99-per-month Starlink service, the satellites are already improving how rural communities access the internet.

With the latest SpaceX launch last week, which carried 60 more internet-beaming satellites into space, the company’s Starlink internet constellation grew to include about 1,000 active satellites — by far the largest array in orbit. SpaceX now owns about one third of all the active satellites in space. More Starlink satellites were put in orbit last year than had been launched by all the rocket providers in the world in 2019. SpaceX has promised its satellite clusters will bring cheap, high-speed internet to the masses by beaming data to every corner of the globe.

The company now says it has roughly 10,000 customers, which proves that Starlink is no longer “theoretical and experimental,” the company said in a February 4 filing with the Federal Communications Commission.

For comparison, Verizon, one of the most popular fiber-optic internet providers, has more than 6 million customers.

At least one participant in Starlink’s beta testing program, Steve Opfer, a manager at chipmaker Broadcom (AVGO) who works out of his rural Wisconsin home, said he “could not be happier” with his service — echoing what dozens of beta testers have said in online forums.

SpaceX gets almost $900 million in federal subsidies to deliver broadband to rural America

Whether or not Starlink will become a sustainable business, however, remains to be seen. Musk noted in a tweet Tuesday morning that the company “needs to pass through a deep chasm of negative cash flow over the next year or so to make Starlink financially viable.”

He also refloated the idea of one day taking SpaceX’s Starlink business public, saying that could happen “once we can predict cash flow reasonably well.” Musk had said last year that the company had “zero thoughts” about a Starlink IPO.

The Starlink network is the largest and most meaningful attempt in history to build a low-latency, space-based internet service for consumers, and Musk noted Tuesday that several previous attempts to create such a network have been abandoned or endured bankruptcy. (Latency refers to how much lag time or delay is built into an internet service.) Systems that require data to travel longer distances, such as more traditional internet satellites that orbit thousands of miles from Earth, create longer lag times. Low-Earth orbit constellations such as Starlink aim to drastically reduce latency by orbiting massive networks of satellites just a few hundred miles above the ground.

The idea has its critics. Fiber-optic-based internet providers, for example, are pushing back against the federal government’s decision to award SpaceX $885.5 million in subsidies. Professional astronomers are also concerned about light pollution. And the sheer number of satellites that make up the Starlink constellation — and other networks planned by companies such as OneWeb and Amazon — has space experts worried about traffic jams and the risk of collisions that could create plumes of debris. Here’s where those controversies stand, and what SpaceX has done to respond to its critics.

Rural broadband

SpaceX and the FCC are facing blowback after the company was awarded nearly $900 million in subsidies through the FCC’s Rural Digital Opportunity Fund, despite objections from traditional telecom companies and even some regulators.

Some beta testers have reported top-of-the-line speeds, but as of late 2020, they were also reportedly experiencing intermittent outages because SpaceX hadn’t launched enough satellites to guarantee continuous coverage. It also remains to be seen how affordable SpaceX’s service will be. CNBC reported in October, citing emails shared with those who expressed interest in becoming Starlink customers, that the service could cost about $99 a month, plus a one-time fee of about $500 for the router and antenna. SpaceX has not yet publicly released Starlink’s price points or terms of service. Musk said in a tweet Tuesday that if Starlink doesn’t fail, “the cost to end users will improve every year.” Yet many still argue that the network will, ultimately, be too expensive to provide the type of paradigm-shifting internet coverage that SpaceX has advertised.

Still, beta testers such as Opfer argue that Starlink is a vast improvement over what many residents of rural areas are used to. Before Starlink, he and his wife relied on HughesNet or ViaSat, more traditional satellite-based internet providers with large satellites orbiting thousands of miles from Earth, whose services are known to be bogged down by frustrating lag times, or high latency. Opfer’s Starlink connection still has some spotty service, which he attributes to the fact that SpaceX is still building up the constellation. The company has said that the total number of satellites could be as high as 40,000.

But “when Starlink works bad, it’s not worse than the best of ViaSat,” Opfer told CNN Business. ViaSat’s head of residential broadband, Evan Dixon, told CNN Business that ViaSat has invested “tens of millions of dollars in addressing, and mitigating latency that people will experience” using ViaSat’s service. In a recent earnings report, the company’s executive chairman, Mark Dankberg, also indicated that the company is skeptical of Starlink’s efficacy. He referred to low-Earth orbit constellations like Starlink as “technologies that are unproven and may not be able to meet the obligations that are associated with them.”

Astronomy and orbital debris

Professional astronomers have been concerned about how Starlink satellites — which are fairly large at 550 pounds — will impact the ground-based telescopes that have long been at the heart of breakthroughs in astrophysics and cosmology. Through much of last year, astronomers were working with the company on ways to make the satellites appear dimmer in space.

After initially trying a dark coating, SpaceX settled on using a retractable sun visor. Jonathan McDowell, an astronomer at the Harvard-Smithsonian Center for Astrophysics, said those have been present on every Starlink satellite launched since last summer. That has made most of them invisible to the naked eye — a win for communities that want to limit light pollution in the night sky.

But the satellites do still interfere with observatories that are essential to astronomers’ efforts to study the cosmos. That has scientists scrambling to figure out how to scrub telescope data that is speckled with bright streaks created by the Starlink satellites. That uses up valuable resources that astronomers hoped to put toward their research rather than “trying to clean the bugs off our windshield just so we can see out of our cars,” said Meredith Rawls, a research scientist with the Vera C. Rubin Observatory.

McDowell and Rawls applaud SpaceX’s desire to keep satellites at lower altitudes, below 1,000 km (about 620 miles). Keeping Starlink satellites in a lower orbit makes them less of a nuisance for telescopes, and it guarantees that satellites that malfunction will be dragged out of orbit in a matter of months, rather than becoming uncontrollable projectiles that can threaten other satellites for centuries.

Astronomers and space traffic experts are still concerned about the lack of regulation around satellite brightness and orbital traffic. OneWeb’s satellite internet constellation, for example, orbits higher than 1,000 km. And if any one of its 6,000 planned satellites malfunctions, it could become a major issue.

Dogecoin Mining: How to Mine Dogecoin – Beginners Guide

So, where would you like to start? The beginning? Great choice. Let’s have a quick look at how Dogecoin got started.

Table of Contents

  • 1. A (Very) Short History of Dogecoin
  • 2. Understanding Crypto Mining Bottom Line
  • 3. What is Dogecoin Mining?
  • 4. Mining Comparison
  • 5. How to Mine Dogecoin
  • 5.1. Dogecoin Mining: Solo vs Pool
  • 5.2. What You Need To Start Mining Dogecoin
  • 5.3. Dogecoin Mining Hardware
  • 5.4. Dogecoin Mining Software
  • 5.5. Dogecoin Cloud Mining
  • 6. So, Is Dogecoin Mining Profitable?

A (Very) Short History of Dogecoin

In 2013, an Australian named Jackson Palmer and an American named Billy Markus became friends because they both liked cryptocurrencies. However, they also thought the whole thing was getting too serious, so they decided to create their own coin.

Palmer and Markus wanted their coin to be more fun and more friendly than other crypto coins. They wanted people who wouldn’t normally care about crypto to get involved.


They decided to use a popular meme as their mascot — a Shiba Inu dog.

Dogecoin was launched on December 6th, 2013. Since then it has become popular because it’s playful and good-natured. Just like its mascot!

Dogecoin has become well-known for its use in charitable acts and online tipping. In 2014, $50,000 worth of Dogecoin was donated to the Jamaican Bobsled Team so they could go to the Olympics. Dogecoin has also been used to build wells in Kenya. Isn’t that awesome?

Users of social platforms – like Reddit – can use Dogecoin to tip or reward each other for posting good content.

Dogecoin has the 27th largest market cap of any cryptocurrency.

Note: A market cap (or market capitalization) is the total value of all coins on the market.

So, Dogecoin is a popular altcoin, known for being fun, friendly and kind. It’s a coin with a dog on it! You love it already, don’t you?

Next, I want to talk about how mining works.

Understanding Crypto Mining Bottom Line

To understand mining, you first need to understand how cryptocurrencies work. Cryptocurrencies are peer-to-peer digital currencies. This means that they allow money to be transferred from one person to another without using a bank.

Every cryptocurrency transaction is recorded on a huge digital database called a blockchain. The database is stored across thousands of computers called nodes. Nodes put together groups of new transactions and add them to the blockchain. These groups are called blocks.

Each block of transactions has to be checked by all the nodes on the network before being added to the blockchain. If nodes didn’t check transactions, people could pretend that they have more money than they really do (I know I would!).

Confirming transactions (mining) requires a lot of computer power and electricity so it’s quite expensive.

Blockchains don’t have paid employees like banks, so they offer a reward to users who confirm transactions. The reward for confirming new transactions is new cryptocurrency. The process of being rewarded with new currency for confirming transactions is what we call “mining”!


It is called mining because it’s a bit like digging for gold or diamonds. Instead of digging with a shovel for gold, you’re digging with your computer for crypto coins!

Each cryptocurrency has its own blockchain. Different ways of mining new currency are used by different coins where different rewards are offered.

So, how do you mine Dogecoin? What’s special about Dogecoin mining? Let’s see…

What is Dogecoin Mining?

Dogecoin mining is the process of being rewarded with new Dogecoin for checking transactions on the Dogecoin blockchain. Simple, right? Well no, it’s not quite that simple, nothing ever is!

Mining Dogecoin is like a lottery. To play the lottery you have to do some work. Well, actually your computer (or node) has to do some work! This work involves the confirming and checking of transactions which I talked about in the last section.

Purchasing Dogecoin takes much less effort, especially when using an exchange such as Binance or Kraken. At the time of writing, Dogecoin had just reached its all-time high and was still climbing fast, so if you buy now, you might still get on that train! At one point the price of Dogecoin increased by more than 300% in a single day.

Lots of computers work on the same block of transactions at the same time, but only one can win the reward of new coins. The one that earns the new coins is the node that adds the new block of transactions to the old block of transactions. This is done by solving complex mathematical equations.

The node that solves the mathematical problem first wins! It can then attach the newly confirmed block of transactions to the rest of the blockchain.
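To make this concrete, here is a toy proof-of-work loop in plain Java. It is only a sketch: real Dogecoin mining hashes actual block headers with the Scrypt algorithm rather than SHA-256 over a made-up string, and the class and data names below are invented for illustration, but the trial-and-error hunt for a winning nonce works the same way.

```java
import java.security.MessageDigest;

// Toy proof-of-work: find a nonce so that the hash of (blockData + nonce)
// starts with a required number of zero hex digits. More required zeros
// means higher difficulty and, on average, many more attempts.
public class ToyMiner {

    static String sha256Hex(String s) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            StringBuilder sb = new StringBuilder();
            for (byte b : md.digest(s.getBytes("UTF-8"))) {
                sb.append(String.format("%02x", b));
            }
            return sb.toString();
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    // Try nonces 0, 1, 2, ... until one produces a hash meeting the target.
    static long mine(String blockData, int leadingZeros) {
        String target = "0".repeat(leadingZeros);
        long nonce = 0;
        while (!sha256Hex(blockData + nonce).startsWith(target)) {
            nonce++;
        }
        return nonce;
    }

    public static void main(String[] args) {
        String block = "block#1:alice->bob:10DOGE"; // made-up block data
        long nonce = mine(block, 4);
        System.out.println("Winning nonce: " + nonce);
        System.out.println("Block hash:    " + sha256Hex(block + nonce));
    }
}
```

Raising `leadingZeros` by one makes the search roughly sixteen times longer on average, which is essentially what a difficulty adjustment does.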

Most cryptocurrency mining happens this way. However, Dogecoin mining differs from other coins in several important areas. These areas are:

  • Algorithm: Each cryptocurrency has a set of rules for mining new currency. These rules are called a mining or hashing algorithm.
  • Block Time: This is the average length of time it takes for a new block of transactions to be checked and added to the blockchain.
  • Difficulty: This is a number that represents how hard it is to mine each new block of currency. You can use the difficulty number to work out how likely you are to win the mining lottery. Mining difficulty can go up or down depending on how many miners there are. The difficulty is also adjusted by the coin’s protocol to make sure that the block time stays the same.
  • Reward: This is the amount of new currency that is awarded to the miner of each new block.
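As a back-of-the-envelope illustration of how difficulty relates to your odds (all numbers below are hypothetical, not live network figures), your chance of winning any given block is roughly your hash rate divided by the network's total hash rate:

```java
// Rough odds model: probability of mining the next block is approximately
// your share of the network's total hashing power. Hypothetical numbers only.
public class MiningOdds {

    static double winProbability(double myHashRate, double networkHashRate) {
        return myHashRate / networkHashRate;
    }

    public static void main(String[] args) {
        double myRate = 500e3;      // pretend GPU: 500 kH/s
        double networkRate = 400e6; // pretend network total: 400 MH/s
        double p = winProbability(myRate, networkRate);
        // With Dogecoin's 1-minute blocks, there are 1440 blocks per day.
        System.out.printf("Chance per block: %.5f, expected blocks/day: %.2f%n",
                p, p * 1440);
    }
}
```

This is also why a lower-difficulty coin is friendlier to small miners: the same hardware represents a bigger slice of a smaller pie.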

Now, let’s compare how Dogecoin mining works to Litecoin and Bitcoin mining.

Mining Comparison

                            Litecoin          Bitcoin                 Dogecoin
Algorithm                   Scrypt            SHA-256                 Scrypt
Difficulty                  6,802,626.0955    3,511,060,552,899.72    2,798,252.1991
Block Time (In Minutes)     2.5               10                      1
Reward (Per Block)          25                12.5                    10,000
Reward (Per Block in USD)   3,027.35          86,391.63               27.36

Source: www.coinwarz.com

Bitcoin uses SHA-256 to guide the mining of new currency and the other two use Scrypt. This is an important difference because Scrypt mining needs a lot less power and is a lot quicker than SHA-256. This makes mining easier for miners with less powerful computers. Fans of Litecoin and Dogecoin think that they are fairer than Bitcoin because more people can mine them.

Note: In 2014, Litecoin and Dogecoin merged mining. This means they made it possible to mine both coins in the same process. Dogecoin mining is now linked with Litecoin mining. It’s like two different football teams playing home games in the same stadium!

Mining Dogecoin is a lot faster than mining Litecoin or Bitcoin. The block reward is much higher too!

Don’t get too excited though (sorry!). Dogecoin is still worth a lot less than Bitcoin and Litecoin. A reward of ten thousand Dogecoin is worth less than thirty US Dollars. A reward of 12.5 Bitcoin is currently worth 86,391.63 US Dollars! 


Note: The numbers might be slightly different by the time you’re reading this guide. 

However, it’s not as bad as it sounds. Dogecoin mining difficulty is more than one million times less than Bitcoin mining difficulty. This means you are much more likely to win the block reward when you mine Dogecoin.

Now I’ve told you about what Dogecoin mining is and how it works, would you like to give it a try?

Let’s see what you need to do to become a Dogecoin miner…

How to Mine Dogecoin  

There are two ways to mine Dogecoin: solo (by yourself) or in a Dogecoin mining pool.

Note: A Dogecoin pool is a group of users who share their computing power to increase the odds of winning the race to confirm transactions. When one of the nodes in a pool confirms a transaction, it divides the reward between the users of the pool equally.

Dogecoin Mining: Solo vs Pool

When you mine as a part of a Dogecoin pool, you have to pay fees. Also, when the pool mines a block you will only receive a small portion of the total reward. However, pools mine blocks much more often than solo miners. So, your chance of earning a reward (even though it is shared) is increased. This can provide you with a steady new supply of Dogecoin.
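A sketch of how a simple proportional pool payout could be calculated (the fee rate and share counts below are hypothetical, and real pools use a variety of payout schemes):

```java
// Simple proportional pool payout: the block reward, minus the pool's fee,
// is split according to each miner's share of the submitted work.
public class PoolPayout {

    static double payout(double blockReward, double feePercent,
                         long myShares, long totalShares) {
        double afterFee = blockReward * (1.0 - feePercent / 100.0);
        return afterFee * ((double) myShares / totalShares);
    }

    public static void main(String[] args) {
        // Hypothetical: 10,000 DOGE block reward, 1% pool fee,
        // you contributed 50 of the pool's 20,000 shares.
        double doge = payout(10_000, 1.0, 50, 20_000);
        System.out.println("Your cut: " + doge + " DOGE"); // 24.75 DOGE
    }
}
```

Small, frequent payouts like this are the trade-off for giving up the occasional full solo reward.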

If you choose to mine solo then you risk waiting a long time to confirm a transaction because there is a lot of competition. It could be weeks or even months before you mine your first block! However, when you do win, the whole reward will be yours. You won’t have to share it or pay any fees.

As a beginner, I would recommend joining a Dogecoin pool. This way you won’t have to wait as long to mine your first block of new currency. You’ll also feel like you’re part of the community and that’s what Dogecoin is all about!

What You Need To Start Mining Dogecoin

Before you start Dogecoin mining, you’ll need a few basics. They are:

  • A PC with either Windows, OS X or Linux operating system.
  • An internet connection.
  • A Shiba Inu puppy (just kidding!).

You’ll also need somewhere to keep the Dogecoin you mine. Online wallets are not recommended; pick a software or hardware wallet instead. Options include the Ledger Nano S, the Trezor Model T and Coinbase. The latter is a software wallet, whereas the first two are hardware wallets.

Note: A wallet is like an email account. It has a public address for sending/receiving Dogecoin and a private key to access them. Your private keys are like your email’s password. Private keys are very important and need to be kept completely secure. 


There are two different types: a light wallet and a full wallet. To mine Dogecoin, you’ll need the full wallet. It’s called Dogecoin Core.

Now that you’ve got a wallet, you need some software and hardware.

Dogecoin Mining Hardware

You can mine Dogecoin with:

  • Your PC’s CPU: The CPU in your PC is probably powerful enough to mine Dogecoin. However, it is not recommended. Mining can cause less powerful computers to overheat which causes damage.
  • A GPU: GPUs (or graphics cards) are used to improve computer graphics but they can also be used to mine Dogecoin. There are plenty of GPUs to choose from but here are a few to get you started:
  • A Scrypt ASIC Miner: This is a piece of hardware designed to do one job only. Scrypt ASIC miners are programmed to mine Scrypt-based currencies like Litecoin and Dogecoin. ASIC miners are very powerful. They are also very expensive, very loud and can get very hot! Here are a few for you to check out:
    • Innosilicon A2 Terminator ($760)
    • Bitmain Antminer L3 ($1,649)
    • BW L21 Scrypt Miner ($7,700)

Dogecoin Mining Software

Whether you’re mining with an ASIC, a GPU or a CPU, you’ll need some software to go with it. You should try to use the software that works best with the hardware you’re using. Here’s a short list of the best free software for each choice of mining hardware;

  • CPU: If you just want to give mining a quick try, using your computer’s CPU will work fine. The only software I would recommend for mining using a CPU only is CPU miner which you can download for free here.
  • GPU: If you mine with a GPU there are more software options. Here are a few to check out;
    • CudaMiner – works best with Nvidia products.
    • CGminer – works with most GPU hardware.
    • EasyMiner – user-friendly, so it’s good for beginners.
  • Scrypt ASIC miner:
    • MultiMiner– Great for mining scrypt based currencies like Litecoin and Dogecoin. It can also be used to mine SHA-256 currencies like Bitcoin.
    • CGminer and EasyMiner can also be used with ASIC miners.

Recommendations

You’re a beginner, so keep it simple! When you first start mining Dogecoin I would recommend using a GPU like the Radeon RX 580 with EasyMiner software. Then I would recommend joining a Dogecoin mining pool. The best pools to join are multi-currency pools like Multipool or AikaPool.


If you want to mine Dogecoin but don’t want to invest in all the tech, there is one other option…

Dogecoin Cloud Mining

Cloud mining is mining without mining! Put simply, you rent computer power from a huge data center for a monthly or yearly fee. The Dogecoin is mined at the center and then your share is sent to you.

All you need to cloud mine Dogecoin is a Dogecoin wallet. Then choose a cloud mining pool to join. Eobot, Nice Hash and Genesis Mining all offer Scrypt-based cloud mining for a monthly fee.

There are pros and cons to Dogecoin cloud mining.

The Pros

  • It’s cheaper than setting up your own mining operation. There’s also no hot, noisy hardware lying around the house!
  • As a beginner, there isn’t a lot of technical stuff to think about.
  • You get a steady supply of new currency every month.

The Cons

  • Cloud mining pools don’t share much information about themselves and how they work. It can be hard to work out whether a cloud mining contract is good value for money.
  • You are only renting computer power. If the price of Dogecoin goes down, you will still have to pay the same amount for something that is worthless.
  • Dogecoin pools have fixed contracts. The world of crypto can change very quickly. You could be stuck with an unprofitable contract for two years!
  • It’s no fun letting someone else do the mining for you!

Now you know about all the different ways to mine Dogecoin, we can ask the big question: can you make tons of money mining Dogecoin?

So, Is Dogecoin Mining Profitable?

The short answer is, not really. Dogecoin mining is not going to make you a crypto billionaire overnight. One Dogecoin is worth about 0.05 US Dollars.

If you choose to mine Dogecoin solo, it will be difficult to make a profit. You will probably spend more money on electricity and hardware than you will make from Dogecoin mining. Even if you choose a Dogecoin pool or a cloud pool your profits will be small.

However, if you think I’m telling you not to mine Dogecoin, then you’re WRONG! Of course I think you should mine Dogecoin; there are simply better ways to make money with DOGE, though. One of the best options is to start trading. If you decide to do that, it’s recommended to choose a reliable crypto exchange, such as Binance.

Make sure not to keep your cryptocurrencies in an online wallet; choose secure wallets instead. Such options include the Ledger Nano X and the Trezor Model T.

But why? Seriously…

Well, you should mine Dogecoin because it’s fun and you want to be a part of the Dogecoin family. Cryptocurrency is going to change the world and you want to be part of that change, right? Mining Dogecoin is a great way to get involved.

Dogecoin is the coin that puts a smile on people’s faces. By mining Dogecoin you’ll be supporting all the good work its community does. You’ll learn about mining from the friendliest gang in crypto. And who knows? In a few years, the Dogecoin you mine now could be worth thousands or even millions! In 2010, Bitcoin was worthless. Think about that!

Only you can choose whether to mine Dogecoin or not. You now know everything you need to know to make your choice. The future is here. So, what are you going to do?

Elon Musk Announces Neuralink Advance Toward Syncing Our Brains With AI

Celebrity engineer Elon Musk today announced a breakthrough in his endeavor to sync the human brain with artificial intelligence. During a live-streamed demonstration involving farm animals and a stage, Musk said that his company Neuralink had built a self-contained neural implant that can wirelessly transmit detailed brain activity without the aid of external hardware.

Musk demonstrated the device with live pigs, one of which had the implant in its brain. A screen above the pig streamed the electrical brain activity being registered by the device. “It’s like a Fitbit in your skull with tiny wires,” Musk said in his presentation. “You need an electrical thing to solve an electrical problem.”

Screengrab: Randi Klett

Musk’s goal is to build a neural implant that can sync up the human brain with AI, enabling humans to control computers, prosthetic limbs, and other machines using only thoughts. When asked during the live Q&A whether the device would ever be used for gaming, Musk answered an emphatic “yes.”

Musk’s aspirations for this brain-computer interface (BCI) system are to be able to read and write from millions of neurons in the brain, translating human thought into computer commands, and vice versa. And it would all happen on a small, wireless, battery-powered implant unseen from the outside of the body. His company has been working on the technology for about four years. 

Teams of researchers globally have been experimenting with surgically implanted BCI systems in humans for over 15 years. The BrainGate consortium and other groups have used BCI to enable people with neurologic diseases and paralysis to operate tablets, type eight words per minute and control prosthetic limbs using only their thoughts.

All of this work is highly experimental. Since 2003, fewer than 20 people in the U.S. have received a BCI implant, all for restorative, medical purposes on a research basis. Most of these systems involve hardware protruding from the head, providing power and data transmission.

These external components create the potential risk of infection and aren’t practical outside a research setting. A few groups have experimented in animals with self-contained, fully implanted devices, but not with the capabilities that Neuralink claims to have. 

Neuralink’s implant contains all the necessary components, including a battery, processing chip, and Bluetooth radio, along with about a thousand electrode contacts, all on board the device. Each electrode records the activity of somewhere between zero and four neurons in the brain. A thousand of them in a living animal would be the highest number the BCI field has seen from a self-contained implant.

Neuralink’s device, if it proves capable of transmitting data safely over the long-term, would be a “major advance” says Bolu Ajiboye, an associate professor of biomedical engineering at Case Western Reserve University and a principal investigator with BrainGate, who is not involved with Neuralink. “There are some really smart, innovative people working at Neuralink. They know what they’re doing and I’m excited to see what they present,” he says.

But the company’s data has not yet been vetted by the research community. (Three pigs on a stage isn’t quite the same as peer-reviewed data). How the device can transmit that much data without generating tissue-damaging heat is not yet demonstrated in humans. 

Plus, Neuralink’s device is “pretty big” for the brain, says Ajiboye. Its cylindrical shape measures 23 mm in diameter by 8 mm long—about the size of a stack of 5 U.S. quarters. By comparison, the Utah array, which has been the go-to device for the BrainGate consortium, measures 4 mm x 4 mm. That device involves hardware protruding from the skull and contains about a hundred electrodes, compared to Neuralink’s 1000.  

Neuralink achieved the advance by experimenting with different materials, upgrading the antennae and wirelessly transmitting only heavily compressed embeddings of neural data from the implant, along with other optimizations made possible through a fast feedback cycle, says Max Hodak, president of Neuralink, who spoke with Spectrum prior to today’s live demonstration. One of the company’s latest prototypes is made of monolithically cast forms of glass that are laser welded together and hermetically sealed. The device so far has lasted safely in pigs for two months, says Hodak. 

During today’s demonstration, which was held at Neuralink’s headquarters in Fremont, California, three pigs were led into corrals where they were able to move about freely in front of a small (human) audience. Gertrude, the pig with the implant, didn’t want to come out to her corral at first, leaving Musk stranded in front of over 150,000 online viewers. She did eventually come out, with her brain activity streamed on a screen above her. Every time she sniffed, the electrical activity in her brain spiked.

Once this kind of brain wave data is obtained, the big question is how to decode and interpret it. “Neural decoding is critically important,” says Ajiboye. “A number of laboratories around the world are spending lots of person-hours on decoding algorithms, using different statistical and deep learning approaches. I haven’t seen that from Neuralink.” 

Neuralink has developed a surgical robot capable of inserting the implant’s electrodes at shallow depths into the brain. Robotic precision reduces the risk of damage to brain tissue.  

Neuralink’s first applications for the technology will be for medical purposes, likely for people with spinal cord injuries. Musk, in bold fashion, has said he wants to pursue non-medical applications too, further in the future. This has led to a lot of hype in the media. 

“We as a field need to be very responsible about what we’re claiming the technology can do, and what application we’re driving toward,” says Ajiboye. “By Elon Musk being in this field there’s a lot of attention being brought to it. That is welcome, but there are challenges posed there. One of those challenges is hype versus reality.” He adds: “Neuralink has entered this race and is riding a fast horse, but there are other devices in development.” 

Top 10 Most Common Mistakes That Android Developers Make: A Programming Tutorial

 Android programming continues to improve. The platform has matured quite a bit since the initial AOSP release, and set the user expectations bar quite high. Look how good the new Material design pattern looks!

There are thousands of different devices, with different screen sizes, chip architectures, hardware configurations, and software versions. Unfortunately, segmentation is the price to pay for openness, and there are thousands of ways your app can fail on different devices, even as an advanced Android programmer.

Regardless of such huge segmentation, the majority of bugs are actually introduced because of logic errors. These bugs are easily prevented, as long as we get the basics right!

Here’s an Android programming tutorial to address the 10 most common mistakes Android developers make.

Learn Android programming at a more advanced level with this tutorial.

Common Mistake #1: Developing for iOS

To my great pleasure, this Android mistake is far less common nowadays (partially because clients are beginning to realize that the days when Apple was setting all the design standards are long gone). But still, every now and then, we see an app that is an iOS clone.

Don’t get me wrong, I’m not an Android programming evangelist! I respect every platform that moves the mobile world a step forward. But, it’s 2021 and users have been using Android for quite a while now, and they’ve grown accustomed to the platform. Pushing iOS design standards to them is a terrible strategy!

Unless there is a super good reason for breaking the guidelines, don’t do it. (Google does this all the time, but never by copy-pasting.)

Here are some of the most common examples of this Android mistake:

  1. You should not be making static tabs, and they don’t belong on the bottom (I’m pointing at you, Instagram).
  2. System notification icons should not have color.
  3. App icons should not be placed inside a rounded rectangle (unless that’s your actual logo, e.g., Facebook).
  4. Splash screens are redundant beyond the initial setup/introduction. Do not use them in other scenarios.
  5. Lists should not have carets.

These are just a few of the many other small things that can ruin the user experience.

Common Mistake #2: Developing for Your Android Device

Unless you are building a kiosk/promo app for a single tablet, chances are your Android app won’t look good on every device. Here are a few Android programming tips to remember:

There are literally thousands of possible scenarios, but after a while you develop a sense for covering them all with a handful of cases.

You don’t own thousands of devices? Not a problem. The Android Emulator does a very good job of replicating physical devices. Even better, try Genymotion; it’s lightning fast and comes with many popular preset devices.

Also, have you tried rotating your device? All hell can break loose…

Common Mistake #3: Not Using Intents

Intents are one of Android’s key components. They are a way of passing data between different parts of the app or, even better, between different apps on the system.

Let’s say you have a gallery app that can share a download link to some images via SMS. Which of the two options seems more logical?

Option 1:

  • Request the SEND_SMS permission:

        <uses-permission android:name="android.permission.SEND_SMS" />
  • Write your own code for sending SMS using the SmsManager.
  • Explain to your users why your gallery app needs access to services that can cost money, and why they have to grant this permission to use your app.

Option 2:

  • Start an SMS Intent and let an app designed for SMS do the work:

        Intent sendIntent = new Intent(Intent.ACTION_VIEW);
        sendIntent.setData(Uri.parse("sms:" + telephoneNumber));
        sendIntent.putExtra("sms_body", smsText);
        startActivity(sendIntent);

If you have any doubts, the best solution is Option 2!

This approach can be applied to almost anything. Sharing content, taking pictures, recording video, picking contacts, adding events, opening links with native apps, etc.

Unless there is a good reason to make a custom implementation (ex., a camera that applies filters), always use Intents for these scenarios. It will save you a lot of programming time, and strip the AndroidManifest.xml of unnecessary permissions.

Common Mistake #4: Not Using Fragments

Android introduced the concept of fragments back in Honeycomb (Android 3.0). Think of them as separate building blocks with their own (rather complex) life cycles that exist inside an Activity. They help a lot with optimizing for various screens, they are easily managed by their parent activity, and they can be reused, combined, and positioned at will.

Launching a separate activity for each app screen is terribly inefficient, since the system will try to keep them in memory as long as it can. Killing one won’t free the resources used by the others.

This Android programming tutorial recommends the proper use of fragments to make your app more efficient.

Unless you want to dig deep into the Android core and read this article advocating against fragment usage, you should use fragments whenever possible. (The article basically argues that fragments and cursor loaders have a good intended purpose but a poor implementation.)
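
As a minimal sketch, a fragment can be declared statically right in your layout XML and the parent activity will manage it for you (the class name com.example.GalleryFragment here is hypothetical):

```xml
<!-- res/layout/activity_main.xml -->
<!-- com.example.GalleryFragment is a hypothetical fragment class;
     substitute your own. The system instantiates it and ties its
     life cycle to the hosting activity. -->
<fragment xmlns:android="http://schemas.android.com/apk/res/android"
    android:id="@+id/gallery_fragment"
    android:name="com.example.GalleryFragment"
    android:layout_width="match_parent"
    android:layout_height="match_parent" />
```

For fragments you need to add, remove, or swap at runtime, you would use a FragmentTransaction instead of a static XML declaration.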

Common Mistake #5: Blocking the Main Thread

The main thread has a single purpose: keeping the user interface responsive.

Although the science behind measuring the frame rate our eyes/brain can perceive is complex and influenced by many factors, a general rule is that anything below 24 fps, or with a delay greater than 100 ms, won’t be perceived as smooth.

This means that the user’s actions will get delayed feedback, and the Android app you have programmed will appear to stop responding. Stripping users of control over the app leads to frustration, and frustrated users tend to leave very negative feedback.

Even worse, if the main thread is blocked for too long (5 seconds for Activities, 10 for Broadcast Receivers), an ANR (Application Not Responding) dialog will appear.

As you learn Android programming, you will come to know and fear the ANR dialog. Follow these Android programming tips to minimize its occurrence.

This was so common in Android 2.x that on newer versions the system won’t even let you make network calls on the main thread (it throws a NetworkOnMainThreadException).

To avoid blocking the main thread, always use worker/background threads for:

  1. Network calls
  2. Bitmap loading
  3. Image processing
  4. Database querying
  5. SD card reading/writing
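
As a rough, plain-Java sketch of the idea (fetchGreeting and the 50 ms sleep are stand-ins for a real network call; on Android you would typically deliver the result back to the UI with runOnUiThread() or a Handler rather than blocking on get()):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class BackgroundWork {
    private static final ExecutorService worker = Executors.newSingleThreadExecutor();

    // Run the slow operation off the caller's thread and hand back a Future.
    // On Android, the main thread should consume the result asynchronously
    // instead of blocking on get() as this demo does.
    static Future<String> fetchGreeting() {
        return worker.submit(() -> {
            Thread.sleep(50); // stand-in for a network round trip
            return "hello";
        });
    }

    public static void main(String[] args) throws Exception {
        System.out.println(fetchGreeting().get()); // prints "hello"
        worker.shutdown();
    }
}
```

The point is the split of responsibilities: the worker thread does the waiting, and the main thread stays free to draw frames and react to input.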

Common Mistake #6: Reinventing the Wheel

“OK, I won’t use the main thread. I’ll write my own code that communicates with my server in a background thread.”

No! Please don’t do that! Network calls, image loading, database access, JSON parsing, and social login are the most common things you do in your app. Not just yours, every app out there. There is a better way. Remember how Android has matured and grown as a platform? Here’s a quick list of examples:

  1. Use Gradle as a build system.
  2. Use Retrofit / Volley for network calls.
  3. Use Picasso for image loading.
  4. Use Gson / Jackson for JSON parsing.
  5. Use common implementations for social login.

If you need something implemented, chances are it’s already written, tested and used widely. Do some basic research and read some Android programming tutorials before writing your own code!

Common Mistake #7: Not Assuming Success

Great. We have learned that there is a better way of handling long-running tasks, and we are using well-documented libraries for that purpose. But the user will still have to wait. It’s inevitable. Packets are not sent, processed, and received instantly. There is round-trip delay, there are network failures, packets get lost, and dreams get destroyed.

But all of this is measurable. Successful network calls are far more likely than unsuccessful ones. So why wait for the server response before handling the successful request? It’s infinitely better to assume success and handle failure. So, when a user likes a post, the like count is increased immediately, and in the unlikely event that the call fails, the user is notified.
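
A hedged sketch of this optimistic-update pattern (the class and member names here are illustrative, not from any real API): bump the counter immediately, then roll back and record a message for the user only if the call fails.

```java
public class OptimisticLike {
    private int likeCount;
    private String lastError;

    OptimisticLike(int initialCount) {
        this.likeCount = initialCount;
    }

    // Optimistically increment, then roll back on failure.
    // callSucceeded is a stand-in for the real network result callback.
    void like(boolean callSucceeded) {
        likeCount++;                  // immediate feedback for the user
        if (!callSucceeded) {         // the unlikely failure path
            likeCount--;              // roll the UI back
            lastError = "Couldn't like the post. Please try again.";
        }
    }

    int getLikeCount() { return likeCount; }
    String getLastError() { return lastError; }
}
```

A successful like on a post with 10 likes shows 11 instantly; a failed one ends back at 10 with a message the UI can surface.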

In this modern world, immediate feedback is expected. People don’t like to wait. Kids don’t want to sit in a classroom obtaining knowledge with an uncertain future payoff. Apps must accommodate the user’s psychology.

Common Mistake #8: Not Understanding Bitmaps

Users love content! Especially when the content is well formatted and looks nice. Images, for instance, are extremely nice content, mainly due to their property of conveying a thousand words per image. They also consume a lot of memory. A lot of memory!

Before an image is displayed on the screen, it has to be loaded into the memory. Since bitmaps are the most common way to do this, we’re going to provide an Android programming guide for the whole process:

Let’s say you want to display an image on your screen that you just took with your camera. The total memory needed for this is calculated with the following formula:

    memory_needed_in_bytes = 4 * image_width * image_height

Why 4? Well, the most common/recommended bitmap configuration is ARGB_8888. That means that for each pixel we draw, we need to keep 8 bits (1 byte) each for the alpha, red, green, and blue channels in memory in order to display it properly. There are alternatives, like the RGB_565 configuration, which requires half the memory of ARGB_8888 but loses the transparency and some color precision (while maybe adding a green tint).

Let’s assume you have a brand new device with full HD screen and 12 MP camera. The picture you just took is 4000×3000 pixels large and the total memory needed to display it is: 4 bytes * 4000 * 3000 = 48 MB

48 megabytes of your RAM just for a single image!? That’s a lot!

Now let’s take the screen resolution into consideration. You are trying to show a 4000×3000 image on a screen with 1920×1080 pixels; in the worst-case scenario (displaying the image full screen) you shouldn’t need to allocate more than 4 * 1920 * 1080 = 8.3 MB of memory.

Always follow the Android programming tips for displaying bitmaps efficiently:

  1. Measure the view you’re showing your images in.
  2. Scale / crop the large image accordingly.
  3. Show only what can be displayed.
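
The arithmetic above, plus the power-of-two downsampling step that BitmapFactory.Options.inSampleSize expects, can be sketched in plain Java (the class and method names are illustrative):

```java
public class BitmapMath {
    // ARGB_8888 stores 4 bytes per pixel: alpha, red, green, blue.
    static long bytesNeeded(int width, int height) {
        return 4L * width * height;
    }

    // Largest power-of-two sample size that keeps the decoded image
    // at least as big as the target view, mirroring the approach in
    // the official "Loading Large Bitmaps Efficiently" guide.
    static int inSampleSize(int srcW, int srcH, int dstW, int dstH) {
        int sample = 1;
        while (srcW / (sample * 2) >= dstW && srcH / (sample * 2) >= dstH) {
            sample *= 2;
        }
        return sample;
    }

    public static void main(String[] args) {
        System.out.println(bytesNeeded(4000, 3000));              // 48000000 bytes, i.e. 48 MB
        System.out.println(inSampleSize(4000, 3000, 1920, 1080)); // 2
    }
}
```

With a sample size of 2, the 4000×3000 photo decodes at 2000×1500 and needs 12 MB instead of 48 MB. Power-of-two sampling won’t always hit the 8.3 MB ideal exactly, but it gets you most of the way there.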

Common Mistake #9: Using Deep View Hierarchy

Layouts have an XML presentation in Android. In order to draw content, the XML needs to be parsed, the screen needs to be measured, and all the elements need to be placed accordingly. It’s a resource- and time-consuming process that needs to be optimized.

If a layout has been inflated once, the system reuses it; this is how the ListView (and more recently the RecyclerView) works. But still, inflating the layout must happen at some point.

Let’s say you want to make a 3×3 grid with images. One way of doing this is a vertical LinearLayout containing 3 LinearLayouts with equal weight, each of them containing 3 ImageViews with equal weight.


What do we get with this approach? A warning that “nested weights are bad for performance”.

There is a saying in the Android programming world, that I just made up: “With little effort all hierarchy can be flattened”.

In this case, a RelativeLayout or GridLayout will efficiently replace the nested LinearLayouts.
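
For example, the same 3×3 grid can be a single GridLayout with nine ImageViews as direct children, one level deep instead of three (a sketch; the drawable name is a placeholder, and note that layout_columnWeight/layout_rowWeight on the framework GridLayout require API 21, or the support-library GridLayout on older versions):

```xml
<!-- res/layout/grid.xml: one GridLayout instead of four nested LinearLayouts -->
<GridLayout xmlns:android="http://schemas.android.com/apk/res/android"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    android:columnCount="3"
    android:rowCount="3">

    <!-- Repeat for all nine cells; @drawable/placeholder is hypothetical -->
    <ImageView
        android:layout_width="0dp"
        android:layout_height="0dp"
        android:layout_columnWeight="1"
        android:layout_rowWeight="1"
        android:src="@drawable/placeholder" />

</GridLayout>
```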

Common Mistake #10: Not Setting the minSdkVersion to 14

Well, this is not a mistake, but it is bad practice.

Android 2.x was a huge milestone in the development of this platform, but some things should be left behind. Supporting older devices adds complexity to code maintenance and limits the development process.

The numbers are clear, the users have moved on, the developers shouldn’t stay behind.

I’m aware that this doesn’t apply to some big markets with older devices (e.g., India), and that setting the minSdkVersion to 14 on the Facebook app would mean leaving a couple of million users without their favorite social network. But if you are starting fresh and trying to create a beautiful experience for your users, do consider eliminating the past. Users who don’t have the resources, or don’t feel the need, to upgrade their device/OS won’t have the incentive to try out a superior version of your Android app and ultimately spend money on it.
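
In a Gradle-based project this is a one-line setting (the targetSdkVersion value here is illustrative; use whatever you have actually tested against):

```groovy
// app/build.gradle
android {
    defaultConfig {
        minSdkVersion 14      // Ice Cream Sandwich; drops Android 2.x support
        targetSdkVersion 30   // illustrative; target the latest version you have tested
    }
}
```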

Wrap Up

Android is a powerful platform that evolves quickly. It may not be reasonable to expect users to keep up the pace, but it’s crucial for the Android developers to do so.

Knowing that Android is not just on our phones or tablets is even more important. It’s on our wrists, in our living rooms, in our kitchens, and in our automobiles. Getting the basics right is of utmost importance before we start expanding.

Artificial Intelligence Stocks To Buy And Watch Amid Rising AI Competition

Artificial intelligence stocks are rarer than you might think. Many companies tout AI technology initiatives and machine learning. But there really are few — if any — public, pure-play artificial intelligence stocks.

The “AI” stock ticker, though, has been claimed by C3.ai. The Redwood City, Calif.-based company sells AI software for the enterprise market. The initial public offering of C3.ai on Dec. 9 raised $651 million.

In general, look for companies using AI technology to improve products or gain a strategic edge, such as Netflix (NFLX).  Intel (INTC), Alphabet‘s (GOOGL) Google and Microsoft (MSFT) are among those making the most investments in AI startups, said a CB Insights venture capital report.

Microsoft belongs to the IBD Leaderboard, IBD’s curated list of leading stocks that stand out on technical and fundamental metrics.

Chip maker Nvidia (NVDA) also belongs to the Leaderboard. Nvidia in September agreed to buy Arm Holdings from Softbank for $40 billion.

In a report to clients, RBC Capital analyst Mitch Steves said: “Nvidia will extend its architecture and offer artificial intelligence or ‘acceleration in a box’ for all ARM-based chips.” He added: “Instead of looking at ARM as a potential CPU play alone, we think the bigger picture is that 22 billion-plus ARM chips can be accelerated with AI.”

AI Stocks: Cloud Computing Battle Heats Up

All AI software needs computing power to find patterns and make inferences from large quantities of data. The race is on to build AI chips for data centers, self-driving cars, robotics, smartphones, drones and other devices.

Nvidia is the leading provider of AI chips for cloud computing and other applications. Analysts expect the battle in AI chips for data-center applications to heat up.

AI chipmaker Graphcore recently raised $222 million at a $2.77 billion valuation. It may file an IPO in 2021.

Amazon Web Services, the cloud unit of Amazon.com (AMZN), recently said it would offer Intel’s Habana AI chips to its customers. Intel in late 2019 acquired Israel-based Habana Labs for $2 billion.

At its virtual re:Invent conference in December, AWS claimed to have “the broadest and most complete set of machine-learning capabilities” among cloud computing service providers. AWS also unveiled a new machine learning training chip, Trainium.

Microsoft’s Azure and Google’s cloud computing unit also sell AI analytical services to business customers.

AI technology uses computer algorithms. The software programs aim to mimic the human ability to learn, interpret patterns and make predictions.

“Machine learning” is the most widely used form of AI deployed in industries. Machine learning systems use huge troves of data to train algorithms to recognize patterns and make predictions.

Software Companies Integrate AI Tools

Aside from chip makers, some software companies are among artificial intelligence stocks to watch. Many software-as-a-service companies use AI tools. Further, Workday (WDAY) showcased its AI and machine learning product innovation at a digital transformation investor event on Oct. 20.

San Mateo, Calif.-based Coupa (COUP) on Nov. 3 agreed to buy Llamasoft, a provider of AI-powered supply chain software, for about $1.5 billion. Llamasoft’s customers include Boeing (BA) and Home Depot (HD).

Enterprise software maker ServiceNow (NOW) has been making AI acquisitions. Under new Chief Executive Bill McDermott, ServiceNow in January acquired two AI companies, Passage AI and Loom Systems.

In addition, ServiceNow has a Relative Strength Rating of 81 out of a possible 99, and ServiceNow stock belongs to the IBD Leaderboard.

DocuSign (DOCU) on Feb. 27 agreed to buy Seal Software for $188 million. The startup uses artificial intelligence for contract analytics.

AI Stocks In Consumer Applications

It’s no secret that Alphabet, Microsoft, Facebook (FB) and Amazon are all spending big bucks on AI technology. The tech giants are putting AI in consumer products and services, such as voice-activated smart home devices. Google and Facebook use AI tools in digital advertising.

Amazon uses AI to customize online retail offerings and recommend products to website visitors. Facebook uses AI to enhance its activity feed, photo and social media apps.

Meanwhile, Netflix utilizes AI to personalize its internet TV content for subscribers. Netflix stock also is on the Leaderboard.

Omdia forecasts that annual AI software revenue will increase from $9.7 billion worldwide in 2018 to $119.3 billion in 2025.

Artificial Intelligence Stocks Span Industries

In addition, AI competition is fierce in many industries, including financial services, pharmaceuticals, health care and cybersecurity. Worldwide spending on AI software for retail uses will boom to $9.8 billion in 2025, up from $1.3 billion in 2019, forecasts Omdia.

In the energy industry, startup C3.ai has teamed with Baker Hughes (BKR) and Microsoft to use artificial intelligence in preventive maintenance. Thomas Siebel, who started Siebel Systems and sold it to Oracle (ORCL) for nearly $6 billion in 2006, founded C3.ai.

In October, Microsoft and Adobe Systems (ADBE) partnered with artificial intelligence startup C3.ai to sell customer relationship management software, Salesforce.com‘s (CRM) core business.

There’s plenty of AI competition in enterprise software.

Meanwhile, Salesforce’s Einstein tools improve sales forecasts. The AI software uses a company’s historical lead and account data to predict which deals are more likely to close. Salesforce has expanded Einstein tools into financial services and other markets.

Salesforce on Nov. 24 said its Einstein platform now delivers more than 80 billion AI-powered predictions daily for sales, service, marketing and commerce. That’s up from 6.5 billion in October 2019.

Adobe: A Leaderboard Stock In AI

In e-commerce, Adobe’s AI tools personalize website content to spotlight products or services that online shoppers are most likely to buy. Adobe also belongs to the IBD Leaderboard.

The IBD 50 roster of growth stocks has featured artificial intelligence stocks in online dating, digital advertising and business communications.

In addition, other companies using AI include:

Square (SQ): Square Capital, part of the digital payment processor, provides loans to merchants. Square Capital uses an AI-driven credit assessment platform when granting new loans.

Match Group (MTCH): Controlled by IAC (IAC), Match is using artificial intelligence to improve its Tinder mobile dating app. Tinder’s new “Super Likable” feature uses machine learning.

Trade Desk (TTD): The digital advertising firm provides automated tools to help customers buy online ads and optimize return on spending. Trade Desk’s AI tools identify the best websites to buy ads on.

Cybersecurity Firms Among Artificial Intelligence Stocks

Here are other stocks to consider:

• Five9 (FIVN): A provider of cloud-based contact center software, Five9 is developing machine learning algorithms that help companies automate customer support. Five9 is partnering with Google on AI contact center software.

• Visa (V) and Mastercard (MA): The credit card networks use AI tools to detect financial crimes such as fraud and money laundering. In addition, big banks use AI in chat bots that provide online customer services.

• Palo Alto Networks (PANW) and Fortinet (FTNT): With artificial intelligence, the cybersecurity firms aim to spot and block malicious activity on computer networks better than existing technologies can.

Omdia forecasts that AI chipsets and accelerators for “edge” applications will grow to $51.9 billion by 2025, up from $7.7 billion in 2019. Those apps include mobile phones, automotive, drones, security cameras, robots and smart speakers.

In addition, memory chip makers such as Micron Technology (MU) should get a boost, analysts say. That’s because intelligent devices will need more memory to process AI apps.

U.S., China Battle In Artificial Intelligence

Semiconductor manufacturing equipment makers such as Applied Materials (AMAT) expect AI to boost demand for high-end gear. Test equipment makers such as Teradyne (TER) could get a boost from AI chips as well.

In addition, others to keep an eye on include IBM (IBM), Accenture (ACN), Epam Systems (EPAM) and other IT services companies.

Also, the U.S. is racing versus China and other countries to develop artificial intelligence technology. In January, the U.S. government placed restrictions on the export of AI software.

Further, the use of artificial intelligence in facial recognition and some other areas has become controversial. Alphabet CEO Sundar Pichai has called for regulation of artificial intelligence.

Investors interested in AI technology also could consider the TCW Artificial Intelligence Equity Fund (TGFTX). It’s primarily for institutions but is open to retail investors.