CS231n Assignment 2: Fully-Connected Neural Nets

Tags: cs231n

fc_net.py

from builtins import range
from builtins import object
import numpy as np

from cs231n.layers import *
from cs231n.layer_utils import *
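
# These wildcard imports are assumed to provide the standard CS231n layer API:
#   affine_forward(x, w, b)      -> (out, cache)
#   affine_backward(dout, cache) -> (dx, dw, db)
#   relu_forward(x)              -> (out, cache)
#   relu_backward(dout, cache)   -> dx
#   softmax_loss(scores, y)      -> (loss, dscores)
# plus affine_relu_forward / affine_relu_backward from layer_utils.py.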

class TwoLayerNet(object):
    """
    A two-layer fully-connected neural network with ReLU nonlinearity and
    softmax loss that uses a modular layer design. We assume an input dimension
    of D, a hidden dimension of H, and perform classification over C classes.

    The architecture should be affine - relu - affine - softmax.

    Note that this class does not implement gradient descent; instead, it
    will interact with a separate Solver object that is responsible for running
    optimization.

    The learnable parameters of the model are stored in the dictionary
    self.params that maps parameter names to numpy arrays.
    """

    def __init__(self, input_dim=3*32*32, hidden_dim=100, num_classes=10,
                 weight_scale=1e-3, reg=0.0):
        """
        Initialize a new network.

        Inputs:
        - input_dim: An integer giving the size of the input
        - hidden_dim: An integer giving the size of the hidden layer
        - num_classes: An integer giving the number of classes to classify
        - weight_scale: Scalar giving the standard deviation for random
          initialization of the weights.
        - reg: Scalar giving L2 regularization strength.
        """
        self.params = {}
        self.reg = reg

        ############################################################################
        # TODO: Initialize the weights and biases of the two-layer net. Weights    #
        # should be initialized from a Gaussian centered at 0.0 with               #
        # standard deviation equal to weight_scale, and biases should be           #
        # initialized to zero. All weights and biases should be stored in the      #
        # dictionary self.params, with first layer weights                         #
        # and biases using the keys 'W1' and 'b1' and second layer                 #
        # weights and biases using the keys 'W2' and 'b2'.                         #
        ############################################################################
        # Weights: zero-mean Gaussian with std weight_scale; biases: zeros
        mu = 0
        sigma = weight_scale
        self.params['W1'] = mu + sigma * np.random.randn(input_dim,hidden_dim)

        self.params['b1'] = np.zeros(hidden_dim)

        self.params['W2'] = mu + sigma * np.random.randn(hidden_dim,num_classes)
        self.params['b2'] = np.zeros(num_classes)

        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################


    def loss(self, X, y=None):
        """
        Compute loss and gradient for a minibatch of data.

        Inputs:
        - X: Array of input data of shape (N, d_1, ..., d_k)
        - y: Array of labels, of shape (N,). y[i] gives the label for X[i].

        Returns:
        If y is None, then run a test-time forward pass of the model and return:
        - scores: Array of shape (N, C) giving classification scores, where
          scores[i, c] is the classification score for X[i] and class c.

        If y is not None, then run a training-time forward and backward pass and
        return a tuple of:
        - loss: Scalar value giving the loss
        - grads: Dictionary with the same keys as self.params, mapping parameter
          names to gradients of the loss with respect to those parameters.
        """
        scores = None
        ############################################################################
        # TODO: Implement the forward pass for the two-layer net, computing the    #
        # class scores for X and storing them in the scores variable.              #
        ############################################################################
        N = X.shape[0]
        D = np.prod(X.shape[1:])
        x_in = X.reshape(N, D)
        fc1 = x_in.dot(self.params['W1']) + self.params['b1']
        relu = np.maximum(0, fc1)
        fc2 = relu.dot(self.params['W2']) + self.params['b2']
        scores = fc2
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        # If y is None then we are in test mode so just return scores
        if y is None:
            return scores

        loss, grads = 0, {}
        ############################################################################
        # TODO: Implement the backward pass for the two-layer net. Store the loss  #
        # in the loss variable and gradients in the grads dictionary. Compute data #
        # loss using softmax, and make sure that grads[k] holds the gradients for  #
        # self.params[k]. Don't forget to add L2 regularization!                   #
        #                                                                          #
        # NOTE: To ensure that your implementation matches ours and you pass the   #
        # automated tests, make sure that your L2 regularization includes a factor #
        # of 0.5 to simplify the expression for the gradient.                      #
        ############################################################################

        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']

        loss, dout2 = softmax_loss(scores, y)
        loss += 0.5 * self.reg * (np.sum(W1*W1)+np.sum(W2*W2))

        cache2 = relu, W2, b2
        dx2,dw2,db2 = affine_backward(dout2, cache2)
        grads['W2'] = dw2 + self.reg * self.params['W2']  # don't forget the L2 regularization contribution of W1 and W2
        grads['b2'] = db2

        cache1 = fc1
        dout1 = dx2
        dout =  relu_backward(dout1, cache1)

        cache = X, W1, b1
        dx1,dw1,db1 = affine_backward(dout, cache)
        grads['W1'] = dw1 + self.reg * self.params['W1']
        grads['b1'] = db1
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        return loss, grads
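
# Typical usage with the assignment's Solver class (a sketch; `data` is assumed to be
# the dictionary of training/validation arrays produced by get_CIFAR10_data()):
#
#   model = TwoLayerNet(hidden_dim=100, reg=1e-2)
#   solver = Solver(model, data,
#                   update_rule='sgd',
#                   optim_config={'learning_rate': 1e-3},
#                   num_epochs=10, batch_size=100,
#                   print_every=100)
#   solver.train()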




class FullyConnectedNet(object):
    """
    A fully-connected neural network with an arbitrary number of hidden layers,
    ReLU nonlinearities, and a softmax loss function. This will also implement
    dropout and batch/layer normalization as options. For a network with L layers,
    the architecture will be

    {affine - [batch/layer norm] - relu - [dropout]} x (L - 1) - affine - softmax

    where batch/layer normalization and dropout are optional, and the {...} block is
    repeated L - 1 times.

    Similar to the TwoLayerNet above, learnable parameters are stored in the
    self.params dictionary and will be learned using the Solver class.
    """

    def __init__(self, hidden_dims, input_dim=3*32*32, num_classes=10,
                 dropout=1, normalization=None, reg=0.0,
                 weight_scale=1e-2, dtype=np.float32, seed=None):
        """
        Initialize a new FullyConnectedNet.

        Inputs:
        - hidden_dims: A list of integers giving the size of each hidden layer.
        - input_dim: An integer giving the size of the input.
        - num_classes: An integer giving the number of classes to classify.
        - dropout: Scalar between 0 and 1 giving dropout strength. If dropout=1 then
          the network should not use dropout at all.
        - normalization: What type of normalization the network should use. Valid values
          are "batchnorm", "layernorm", or None for no normalization (the default).
        - reg: Scalar giving L2 regularization strength.
        - weight_scale: Scalar giving the standard deviation for random
          initialization of the weights.
        - dtype: A numpy datatype object; all computations will be performed using
          this datatype. float32 is faster but less accurate, so you should use
          float64 for numeric gradient checking.
        - seed: If not None, then pass this random seed to the dropout layers. This
          will make the dropout layers deterministic so we can gradient check the
          model.
        """
        self.normalization = normalization
        self.use_dropout = dropout != 1
        self.reg = reg
        self.num_layers = 1 + len(hidden_dims)
        self.dtype = dtype
        self.params = {}

        ############################################################################
        # TODO: Initialize the parameters of the network, storing all values in    #
        # the self.params dictionary. Store weights and biases for the first layer #
        # in W1 and b1; for the second layer use W2 and b2, etc. Weights should be #
        # initialized from a normal distribution centered at 0 with standard       #
        # deviation equal to weight_scale. Biases should be initialized to zero.   #
        #                                                                          #
        # When using batch normalization, store scale and shift parameters for the #
        # first layer in gamma1 and beta1; for the second layer use gamma2 and     #
        # beta2, etc. Scale parameters should be initialized to ones and shift     #
        # parameters should be initialized to zeros.                               #
        ############################################################################
        parameters = [input_dim] + hidden_dims + [num_classes]
        lx = len(parameters)
        for i in range(1,lx):
            idw = 'W' + str(i)
            idb = 'b' + str(i)

            self.params[idw] = np.random.randn(parameters[i-1], parameters[i]) * weight_scale
            self.params[idb] = np.zeros(parameters[i])
        if self.normalization=='batchnorm':
            for i in range(1,lx-1):
                idga = 'gamma' + str(i)
                idbe = 'beta' + str(i)
                self.params[idga] = np.ones(parameters[i])
                self.params[idbe] = np.zeros(parameters[i])


        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        # When using dropout we need to pass a dropout_param dictionary to each
        # dropout layer so that the layer knows the dropout probability and the mode
        # (train / test). You can pass the same dropout_param to each dropout layer.
        self.dropout_param = {}
        if self.use_dropout:
            self.dropout_param = {'mode': 'train', 'p': dropout}
            if seed is not None:
                self.dropout_param['seed'] = seed

        # With batch normalization we need to keep track of running means and
        # variances, so we need to pass a special bn_param object to each batch
        # normalization layer. You should pass self.bn_params[0] to the forward pass
        # of the first batch normalization layer, self.bn_params[1] to the forward
        # pass of the second batch normalization layer, etc.
        self.bn_params = []
        if self.normalization=='batchnorm':
            self.bn_params = [{'mode': 'train'} for i in range(self.num_layers - 1)]
        if self.normalization=='layernorm':
            self.bn_params = [{} for i in range(self.num_layers - 1)]

        # Cast all parameters to the correct datatype
        for k, v in self.params.items():
            self.params[k] = v.astype(dtype)


    def loss(self, X, y=None):
        """
        Compute loss and gradient for the fully-connected net.

        Input / output: Same as TwoLayerNet above.
        """
        X = X.astype(self.dtype)
        mode = 'test' if y is None else 'train'

        # Set train/test mode for batchnorm params and dropout param since they
        # behave differently during training and testing.
        if self.use_dropout:
            self.dropout_param['mode'] = mode
        if self.normalization=='batchnorm':
            for bn_param in self.bn_params:
                bn_param['mode'] = mode
        scores = None
        ############################################################################
        # TODO: Implement the forward pass for the fully-connected net, computing  #
        # the class scores for X and storing them in the scores variable.          #
        #                                                                          #
        # When using dropout, you'll need to pass self.dropout_param to each       #
        # dropout forward pass.                                                    #
        #                                                                          #
        # When using batch normalization, you'll need to pass self.bn_params[0] to #
        # the forward pass for the first batch normalization layer, pass           #
        # self.bn_params[1] to the forward pass for the second batch normalization #
        # layer, etc.                                                              #
        ############################################################################
        N, D = X.shape[0], np.prod(X.shape[1:])
        scores = X.reshape(N,D)
        list_x_in = {}
        list_x_in['x' + str(0)] = scores
        for i in range(1,self.num_layers):
            xa, wa, ba = scores, self.params['W' + str(i)], self.params['b' + str(i)]
            if self.normalization == 'batchnorm' and self.use_dropout:
                gamma, beta = self.params['gamma' + str(i)], self.params['beta' + str(i)]
                scores, cache = affine_bn_relu_drop_forward(xa, wa, ba, gamma, beta, self.bn_params[i - 1],self.dropout_param)
            elif self.normalization == 'batchnorm':
                gamma, beta = self.params['gamma' + str(i)], self.params['beta' + str(i)]
                scores, cache = affine_bn_relu_forward(xa, wa, ba, gamma, beta, self.bn_params[i-1])
            elif self.use_dropout:
                scores, cache = affine_relu_drop_forward(xa, wa, ba, self.dropout_param)
            else:
                scores, cache = affine_relu_forward(xa, wa, ba)
            list_x_in['x'+str(i)] = cache
        # Note: no ReLU after the last affine layer
        xa, wa, ba = scores, self.params['W' + str(self.num_layers)], self.params['b' + str(self.num_layers)]
        scores, cache = affine_forward(xa, wa, ba)
        list_x_in['x' + str(self.num_layers)] = cache
        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################


        # If test mode return early
        if mode == 'test':
            return scores

        loss, grads = 0.0, {}
        ############################################################################
        # TODO: Implement the backward pass for the fully-connected net. Store the #
        # loss in the loss variable and gradients in the grads dictionary. Compute #
        # data loss using softmax, and make sure that grads[k] holds the gradients #
        # for self.params[k]. Don't forget to add L2 regularization!               #
        #                                                                          #
        # When using batch/layer normalization, you don't need to regularize the scale   #
        # and shift parameters.                                                    #
        #                                                                          #
        # NOTE: To ensure that your implementation matches ours and you pass the   #
        # automated tests, make sure that your L2 regularization includes a factor #
        # of 0.5 to simplify the expression for the gradient.                      #
        ############################################################################

        loss, dout = softmax_loss(scores, y)
        for i in range(self.num_layers):
            loss = loss + 0.5*self.reg*np.sum(np.square(self.params['W'+str(i+1)]))

        cache = list_x_in['x' + str(self.num_layers)]
        dout, dw, db = affine_backward(dout, cache)
        grads['W' + str(self.num_layers)], grads['b' + str(self.num_layers)] = dw + self.reg * self.params['W' + str(self.num_layers)], db

        for id in range(self.num_layers-1,0,-1):
            cache = list_x_in['x'+str(id)]

            if self.normalization == 'batchnorm' and self.use_dropout:
                dout, dw, db, dgamma, dbeta = affine_bn_relu_drop_backward(dout, cache)
                grads['gamma' + str(id)], grads['beta' + str(id)] = dgamma, dbeta
            elif self.normalization == 'batchnorm':
                dout, dw, db, dgamma, dbeta = affine_bn_relu_backward(dout, cache)
                grads['gamma' + str(id)], grads['beta' + str(id)] = dgamma, dbeta
            elif self.use_dropout:
                dout, dw, db = affine_relu_drop_backward(dout, cache)
            else:
                dout, dw, db = affine_relu_backward(dout, cache)
            grads['W'+str(id)], grads['b'+str(id)] = dw+self.reg * self.params['W'+str(id)], db

        ############################################################################
        #                             END OF YOUR CODE                             #
        ############################################################################

        return loss, grads
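
The FullyConnectedNet above relies on several combined-layer helpers (affine_bn_relu_forward, affine_relu_drop_forward, affine_bn_relu_drop_forward and their backward counterparts) that are not in the stock layer_utils.py and are not shown in this post. A minimal sketch of how they could be composed from the primitive layers, assuming the standard batchnorm_forward/backward and dropout_forward/backward signatures from cs231n.layers:

from cs231n.layers import *
from cs231n.layer_utils import affine_relu_forward, affine_relu_backward

def affine_bn_relu_forward(x, w, b, gamma, beta, bn_param):
    # affine -> batchnorm -> relu
    a, fc_cache = affine_forward(x, w, b)
    bn, bn_cache = batchnorm_forward(a, gamma, beta, bn_param)
    out, relu_cache = relu_forward(bn)
    return out, (fc_cache, bn_cache, relu_cache)

def affine_bn_relu_backward(dout, cache):
    fc_cache, bn_cache, relu_cache = cache
    dbn = relu_backward(dout, relu_cache)
    da, dgamma, dbeta = batchnorm_backward(dbn, bn_cache)
    dx, dw, db = affine_backward(da, fc_cache)
    return dx, dw, db, dgamma, dbeta

def affine_relu_drop_forward(x, w, b, dropout_param):
    # affine -> relu -> dropout
    a, ar_cache = affine_relu_forward(x, w, b)
    out, drop_cache = dropout_forward(a, dropout_param)
    return out, (ar_cache, drop_cache)

def affine_relu_drop_backward(dout, cache):
    ar_cache, drop_cache = cache
    da = dropout_backward(dout, drop_cache)
    dx, dw, db = affine_relu_backward(da, ar_cache)
    return dx, dw, db

def affine_bn_relu_drop_forward(x, w, b, gamma, beta, bn_param, dropout_param):
    # affine -> batchnorm -> relu -> dropout
    a, abr_cache = affine_bn_relu_forward(x, w, b, gamma, beta, bn_param)
    out, drop_cache = dropout_forward(a, dropout_param)
    return out, (abr_cache, drop_cache)

def affine_bn_relu_drop_backward(dout, cache):
    abr_cache, drop_cache = cache
    da = dropout_backward(dout, drop_cache)
    dx, dw, db, dgamma, dbeta = affine_bn_relu_backward(da, abr_cache)
    return dx, dw, db, dgamma, dbeta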

optim.py

import numpy as np

"""
This file implements various first-order update rules that are commonly used
for training neural networks. Each update rule accepts current weights and the
gradient of the loss with respect to those weights and produces the next set of
weights. Each update rule has the same interface:

def update(w, dw, config=None):

Inputs:
  - w: A numpy array giving the current weights.
  - dw: A numpy array of the same shape as w giving the gradient of the
    loss with respect to w.
  - config: A dictionary containing hyperparameter values such as learning
    rate, momentum, etc. If the update rule requires caching values over many
    iterations, then config will also hold these cached values.

Returns:
  - next_w: The next point after the update.
  - config: The config dictionary to be passed to the next iteration of the
    update rule.

NOTE: For most update rules, the default learning rate will probably not
perform well; however the default values of the other hyperparameters should
work well for a variety of different problems.

For efficiency, update rules may perform in-place updates, mutating w and
setting next_w equal to w.
"""


def sgd(w, dw, config=None):
    """
    Performs vanilla stochastic gradient descent.

    config format:
    - learning_rate: Scalar learning rate.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)

    w -= config['learning_rate'] * dw
    return w, config


def sgd_momentum(w, dw, config=None):
    """
    Performs stochastic gradient descent with momentum.

    config format:
    - learning_rate: Scalar learning rate.
    - momentum: Scalar between 0 and 1 giving the momentum value.
      Setting momentum = 0 reduces to sgd.
    - velocity: A numpy array of the same shape as w and dw used to store a
      moving average of the gradients.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('momentum', 0.9)
    v = config.get('velocity', np.zeros_like(w))

    next_w = None
    ###########################################################################
    # TODO: Implement the momentum update formula. Store the updated value in #
    # the next_w variable. You should also use and update the velocity v.     #
    ###########################################################################
    v = config['momentum'] * v - config['learning_rate']*dw
    next_w = w + v
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################
    config['velocity'] = v

    return next_w, config



def rmsprop(w, dw, config=None):
    """
    Uses the RMSProp update rule, which uses a moving average of squared
    gradient values to set adaptive per-parameter learning rates.

    config format:
    - learning_rate: Scalar learning rate.
    - decay_rate: Scalar between 0 and 1 giving the decay rate for the squared
      gradient cache.
    - epsilon: Small scalar used for smoothing to avoid dividing by zero.
    - cache: Moving average of second moments of gradients.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-2)
    config.setdefault('decay_rate', 0.99)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('cache', np.zeros_like(w))

    next_w = None
    ###########################################################################
    # TODO: Implement the RMSprop update formula, storing the next value of w #
    # in the next_w variable. Don't forget to update cache value stored in    #
    # config['cache'].                                                        #
    ###########################################################################
    cache = config['decay_rate']*config['cache']+(1-config['decay_rate'])*np.square(dw)
    next_w = w - config['learning_rate'] * dw / np.sqrt(cache+config['epsilon'])
    config['cache'] = cache
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################

    return next_w, config


def adam(w, dw, config=None):
    """
    Uses the Adam update rule, which incorporates moving averages of both the
    gradient and its square and a bias correction term.

    config format:
    - learning_rate: Scalar learning rate.
    - beta1: Decay rate for moving average of first moment of gradient.
    - beta2: Decay rate for moving average of second moment of gradient.
    - epsilon: Small scalar used for smoothing to avoid dividing by zero.
    - m: Moving average of gradient.
    - v: Moving average of squared gradient.
    - t: Iteration number.
    """
    if config is None: config = {}
    config.setdefault('learning_rate', 1e-3)
    config.setdefault('beta1', 0.9)
    config.setdefault('beta2', 0.999)
    config.setdefault('epsilon', 1e-8)
    config.setdefault('m', np.zeros_like(w))
    config.setdefault('v', np.zeros_like(w))
    config.setdefault('t', 0)

    next_w = None
    ###########################################################################
    # TODO: Implement the Adam update formula, storing the next value of w in #
    # the next_w variable. Don't forget to update the m, v, and t variables   #
    # stored in config.                                                       #
    #                                                                         #
    # NOTE: In order to match the reference output, please modify t _before_  #
    # using it in any calculations.                                           #
    ###########################################################################
    config['t'] = config['t'] + 1
    first_moment = config['beta1']*config['m'] + (1-config['beta1'])*dw
    second_moment = config['beta2']*config['v'] + (1-config['beta2'])*dw*dw
    first_moment_bias = first_moment / (1 - config['beta1']**config['t'])
    second_moment_bias = second_moment / (1 - config['beta2']**config['t'])
    next_w = w - config['learning_rate'] * first_moment_bias / (np.sqrt(second_moment_bias)+config['epsilon'])
    config['m'] = first_moment
    config['v'] = second_moment
    ###########################################################################
    #                             END OF YOUR CODE                            #
    ###########################################################################

    return next_w, config
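
For reference, these update rules are normally driven by the Solver class, which keeps one config dictionary per parameter and threads it through successive calls. A minimal standalone sketch of that calling pattern on a toy quadratic loss (an illustration only, not part of the assignment):

import numpy as np

w = np.ones((3, 2), dtype=np.float64)   # toy parameter
config = None                           # no cached state on the first call
for t in range(5):
    dw = w                              # gradient of f(w) = 0.5 * ||w||^2 is w itself
    w, config = adam(w, dw, config)     # config carries m, v, t between iterations
    print(t, 0.5 * np.sum(w * w))       # the toy loss shrinks a little every step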

Copyright notice: this is an original post by the author, licensed under CC 4.0 BY-SA; please include the original source link and this notice when reposting.
Original link: https://blog.csdn.net/yjf3151731373/article/details/104539760
